HIGH AVAILABILITY
VIRTUALIZATION WITH
PROXMOX
ORIOL IZQUIERDO
JUNIOR PILLIGUA
ASIX-2
 
 
 
INDEX
INTRODUCTION
TECHNOLOGY
WHAT IS VIRTUALIZATION AND HOW IT BENEFITS US
VIRTUALIZATION SOLUTIONS BASED ON HYPERVISOR
OBJECTIVES
STRUCTURE AND NETWORK DIAGRAM
MANUAL INSTALLATION PROXMOX
CREATE CLUSTER
CONFIGURATION BONDING
INSTALL CEPH
PROBLEMS WITH KVM
INSTALL DRBD
CONFIGURATION NFS
CONFIGURATION HEARTBEAT
VERIFICATION
Security access
MIGRATION VM TO OTHER NODE
MANUAL INSTALLATION FreeNAS
CREATE NFS
scheduled backup
Backup for command line
Restoration Backup for command line
CONCLUSIONS
BIBLIOGRAPHY
INTRODUCTION
We are a medical center that is growing every day. We have several specialties, 100 employees and 50 physicians, and we are currently expanding the staff. We also serve a large number of patients. Lately, however, our patients have been complaining because our system has gone down and we have not been able to work normally. For this reason, we have decided to set up a high-availability cluster so that we can work with more peace of mind, continue to offer a better service to our patients, and keep growing.
In this project we install and configure a high-availability cluster with PROXMOX. We will use three servers on which Zentyal is installed, where we can configure Active Directory, DNS or DHCP. We will have the three servers in operation; if one of them fails, with high availability nothing should happen, and the system should keep running on the other two servers. In our network we will have a "management" PC, which will be the only one allowed to administer Proxmox. We will have two NAS servers whose data is synchronized on both disks. Finally, we will make a backup of the Proxmox machines; in theory this copy should be stored in a different location, but we cannot do that here.
TECHNOLOGY
Proxmox Virtual Environment: an open source server virtualization environment. It is a Debian-based GNU/Linux distribution with a modified version of the RHEL kernel, and it allows the deployment and management of virtual machines and containers.
The main features of Proxmox VE are:
● open source
● live migration
● network bridging
● OS templates
● scheduled backups
● command-line tools
Bonding: a method of combining two or more network interfaces into a single logical interface. This increases throughput, bandwidth, and redundancy. If one interface is disconnected or inactive, the other interface keeps network traffic flowing. Bonding is used in situations where you need redundancy, fault tolerance, or load balancing on the network.
High-availability cluster: a set of two or more machines that maintain a series of shared services and constantly monitor each other. If a hardware failure occurs on any of the cluster machines, the high-availability software is capable of automatically restarting the services on any of the other nodes that form the cluster.
KVM: a complete virtualization solution that allows us to run multiple virtual machines (Windows, Linux, Unix, 32- or 64-bit), in which each virtual machine has its own virtual hardware.
OpenVZ: container-based virtualization for Linux. It allows Proxmox to run multiple isolated operating system "instances" on a single physical server, with the advantage that each container shares the hardware resources of the host server.
DRBD: (Distributed Replicated Block Device) allows remote mirroring in real time (equivalent to RAID-1 over the network), something very difficult to achieve with other systems.
NFS: the Network File System (NFS) is a client/server application that allows a computer user to view, and optionally store and update, files on a remote computer as if they were on the user's own computer.
HeartBeat: an application that provides high availability to two servers, which share an IP address. One will be active and the other passive.
WHAT IS VIRTUALIZATION AND HOW IT BENEFITS US
Virtualization is the ability to run several logical machines on a single physical one, either through specialized software or in more professional environments like the one this project deals with.
Each virtual machine is isolated from the rest, so a farm of independent servers can be created on a single physical machine, with the reduction in economic and energy costs that this implies. This technology is based on the hypervisor.
The hypervisor sits between the hardware of the real server and the virtual machines. It assigns the amount of access that each virtual machine has to the processor, memory, hard disk and other resources.
The benefits are:
Higher utilization rates
Before virtualization, server and storage utilization rates in company data centers were below 50%. Through virtualization, workloads can be encapsulated and transferred to idle or unused systems.
Resource consolidation
Virtualization allows the consolidation of multiple resources. Beyond consolidating storage, virtualization offers an opportunity to consolidate the systems architecture, application infrastructure, data and databases, interfaces, networks, desktops, and even business processes, resulting in cost savings and greater efficiency.
Lower energy use
The electricity required to run business-class data centers is no longer available in unlimited supply, and its cost is spiraling upward. Using virtualization, you can reduce your total energy consumption and save money in a meaningful way.
Space savings
The physical footprint of servers is a serious problem in most companies; with virtualization you can save physical machines.
Disaster recovery / business continuity
Virtualization increases the availability of the services installed on it, and all virtualization tools offer new solutions for disaster recovery.
Ease of maintenance and supervision of critical systems
All hypervisor-based virtualization tools incorporate different utilities for their correct configuration and maintenance. Some of these include monitoring, email notification, and warnings about excessive use of some hardware component. This makes it possible to get a summary of our infrastructure quickly and easily, since these tools allow the monitoring and configuration of several nodes.
WHAT IS KVM?
It is a complete virtualization solution for Linux on x86 hardware with virtualization extensions
(Intel VT or AMD-V). With KVM, we can run multiple virtual machines without modifying the
Linux or Windows images that are running. KVM is part of Red Hat Emerging Technologies.
VIRTUALIZATION SOLUTIONS BASED ON HYPERVISOR
                      VMWARE ESXi         HYPER-V              XENSERVER           PROXMOX VE
OS SUPPORTED          Windows / Linux     Windows / Linux      Windows / Linux     Windows / Linux
                                          (limited support)
LICENSE               Payment             Payment              Payment             Free
                                          (incl. WServer)
CONTAINERS            No                  No                   No                  Yes (OpenVZ)
MAXIMUM HARDWARE      128 CPU / 4 TB      240 CPU / 12 TB      32 CPU / 1.5 TB     160 CPU / 2 TB
SUPPORTED             RAM / 62 TB         RAM / 128 network    RAM / 2 TB HDD /    RAM / 4000
                      VMDK file           adapters / 64 TB     7 network           network
                                          virtual disk         adapters            adapters
SAN STORAGE           Yes                 Yes                  Yes                 Yes
LIVE VM MIGRATION     Yes                 Yes (limited)        Yes                 Yes
SNAPSHOTS             Yes                 Yes                  Yes                 Yes
We have chosen Proxmox because it is open source software. After reading several articles and forums, we opted for this software, so the company will have a good tool for high availability.
OBJECTIVES
Our main goal is to create a secure, high-availability, fully open-source virtualized system. To do this we use three PROXMOX servers, each with a dual network card, and two NAS servers that are synchronized with each other. With this system we will achieve high availability in our company, so that we can work properly and avoid situations in which a server falls and we spend hours unable to work.
STRUCTURE AND NETWORK DIAGRAM
IP                 MACHINE             ROLE
192.168.124.200    PROXMOX 01          Proxmox server, cluster node
192.168.124.205    PROXMOX 02          Proxmox server, cluster node
192.168.124.215    PROXMOX 03          Proxmox server, cluster node
192.168.124.230    DRBD 01             Server hosting the virtual machines and backups of our system
192.168.124.231    DRBD 02             Server hosting the virtual machines and backups of our system
192.168.124.216    Ubuntu Server       Proxy server and web server
192.168.124.212    SRV BACKUPS - NFS   Server storing the backups of our system
MANUAL INSTALLATION PROXMOX
Once we have downloaded the ISO and booted it, we start the installation and find a menu with different options:
● Install Proxmox
● Install Proxmox in debug mode
● Rescue the boot sector of the disk
● Check the integrity of the computer's memory
In our case we choose the first option: Install Proxmox VE.
The wizard that will accompany us during the entire installation process starts.
First it shows us the license agreement, which we must read and accept before proceeding with the process.
Now we have to choose the hard drive where we will install Proxmox VE.
Here we configure the time zone.
After the time zone, we set the password with which we will later log in as the root user.
With this we have finished the installation setup. From here the wizard configures the disk and installs the system.
Once installed, we are ready to restart the system and start using it.
When we start the system, the first thing we see is a blue boot screen with different options.
Here we have a text-mode screen that asks for the username and password, and shows the URL from which we can log in via a web interface.
Click to access.
We log in to the web interface with the username and password.
Now we have the Proxmox VE management interface.
UPDATING /etc/hosts
We edit the /etc/hosts file on each of the three servers and add an entry for every node.
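The added lines are only shown as a screenshot; a sketch, assuming the host names proxmox01-proxmox03 (the IPs come from the address table above):
192.168.124.200 proxmox01
192.168.124.205 proxmox02
192.168.124.215 proxmox03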
UPDATING SYSTEM AND TEMPLATES
Although we downloaded the ISO recently, it is always a good idea to update the system with the command:
# apt-get update && apt-get dist-upgrade
The templates are images of operating systems already prepared for startup. At first boot they will ask us for a series of parameters depending on their purpose. An example of this is the website https://www.turnkeylinux.org. In order to use this handy resource we must update the template list that Proxmox includes by default.
The list can be seen with the command
# pveam available
If we execute it before updating, only the system templates appear.
Now we update with the command
# pveam update
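Once the list is updated, a template can be downloaded to a storage with pveam download; a sketch, where local is the default storage and the template name is only an illustrative example taken from the list:
# pveam download local debian-9.0-standard_9.7-1_amd64.tar.gz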
CREATE CLUSTER
To create the cluster we have to choose a node that will act as the main one; in our case Node 1 is the main node. We create the cluster with the pvecm command, giving it the name we want, in our case "clusterasix".
With the cluster already created, the next step is to connect to the other two nodes and join them by means of the pvecm add command followed by the IP of Node 1, where the created cluster is located.
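As a sketch, the commands behind the screenshots (the IP is Node 1's address from the table above):
On Node 1:
# pvecm create clusterasix
On Nodes 2 and 3:
# pvecm add 192.168.124.200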
When we have added both nodes we can check the status:
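From the command line the check looks like this sketch:
# pvecm status
# pvecm nodes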
If we enter the Web interface we can see the three nodes.
CONFIGURATION BONDING
To configure the bonding we must first edit the bridge interface, named "vmbr0". We change the interface it uses by default to the bond we are about to create, named "bond0". The rest of the parameters are left the same.
Now click on Linux Bond.
We have to add all the interfaces that will form the bond and choose the bond mode, which will be "balance-rr".
Final configuration of the bonding:
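For reference, a sketch of the resulting /etc/network/interfaces; the physical NIC names (ens18, ens19) and the gateway are assumptions, the address is Node 1's from the table:

auto bond0
iface bond0 inet manual
        slaves ens18 ens19
        bond_miimon 100
        bond_mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address 192.168.124.200
        netmask 255.255.255.0
        gateway 192.168.124.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0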
Here we can see that it is created correctly. The bonding configuration has to be on all three
servers.
INSTALL CEPH
To allow the movement of virtual machines between nodes, it is necessary to install and configure Ceph.
It is a package that we install on all the nodes; on one of them (the main one) we indicate the network on which it works. Ceph makes this quite simple.
Its installation on all the nodes of our network:
# pveceph install --version luminous
Activate the Ceph monitor service:
# pveceph init --network x.x.x.x/X
# pveceph createmon
We start the service on all the nodes.
From the web interface we create the monitor on each of these Ceph nodes:
Node > Ceph > Monitor > Create
PROBLEMS WITH KVM
We detected a problem when booting our test VM. On startup it gave us the following error:
After researching, we saw that KVM hardware virtualization cannot be active in the VMs and in the real machine at the same time. We adjusted the VM's KVM setting to ensure a clean installation.
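As a side note, if the error appears because the Proxmox host itself runs inside a virtual machine, a common workaround (an assumption about the setup; shown for Intel CPUs) is to enable nested virtualization on the outer host:
# cat /sys/module/kvm_intel/parameters/nested
# echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
# modprobe -r kvm_intel && modprobe kvm_intel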
INSTALL DRBD
DRBD allows remote mirroring in real time (equivalent to RAID-1 over the network), something very difficult to achieve with other systems. DRBD creates a block device, drbd0, accessible from both servers. The primary server is the one that has access to the drbd0 device: every time something is written to drbd0, it is written to the physical partition and the same data is sent over TCP/IP to the secondary server, keeping both physical partitions synchronized, exactly like a RAID-1.
With this software we synchronize the information: if one server falls, the other takes over, so we do not lose the information.
First of all we installed two Debian servers and added one hard disk of the same size to each server.
Once we have the two servers, we partition the disks with fdisk.
Now we proceed to install DRBD on the two servers.
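The exact commands are only shown in the screenshots; as a sketch, assuming the added disk appears as /dev/sdb and we create a single partition /dev/sdb1 on it:
# fdisk /dev/sdb
# apt-get install drbd-utils
(on older Debian releases the package is named drbd8-utils)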
We edit the DRBD configuration file and add the resource: the device, the disk on which we create the storage, and the IP addresses of the two servers.
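A sketch of the resource file, using the resource name r0 and the device /dev/drbd0 mentioned later in the HeartBeat section; the node names (nas01, nas02) and the backing partition /dev/sdb1 are assumptions, the IPs are those of the two NAS servers:

resource r0 {
        protocol C;
        on nas01 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.124.230:7788;
                meta-disk internal;
        }
        on nas02 {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.124.231:7788;
                meta-disk internal;
        }
}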
Initialize the metadata storage on both servers.
Activate DRBD on both servers.
Configure DRBD1 as the primary server by executing the following command on server 1.
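A sketch of these three steps with drbdadm, assuming the resource r0 from the configuration above:
# drbdadm create-md r0
# drbdadm up r0
# drbdadm -- --overwrite-data-of-peer primary r0
(the last command only on server 1)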
We can see in the image that it appears as primary, but the second disk is still missing; at first we had quite a few problems getting them to synchronize.
On server 2 it shows up as secondary.
The primary disk will hold all the information, and the secondary will wait, ready to take over if the primary goes down.
CONFIGURATION NFS
Install the NFS server on both servers.
Remove the NFS startup scripts (HeartBeat will be in charge of starting and stopping the service).
We will create an ext3 file system on the DRBD device and mount it on the directory /data. This has to be done only on server 1.
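As a sketch, assuming the device /dev/drbd0:
# apt-get install nfs-kernel-server
# update-rc.d -f nfs-kernel-server remove
# mkfs.ext3 /dev/drbd0
# mkdir /data && mount /dev/drbd0 /data
(the last two commands on server 1 only)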
We verify that it has been mounted
CONFIGURATION HEARTBEAT
We will use HeartBeat because we had a problem: when the primary server fell, the secondary did not take over. With this software we have created a virtual IP, so that when the primary falls, service jumps to the secondary.
Let's look at the HeartBeat configuration. We will install HeartBeat on both servers. HeartBeat controls the whole thing: it launches and stops NFS on both servers, monitors and activates the secondary server if the primary fails, and ensures that the NFS server responds on the virtual IP.
We install HeartBeat.
/etc/heartbeat/ha.cf: We will create this identical file on both servers:
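The file contents are only shown as a screenshot; a minimal sketch, assuming the node names nas01 and nas02 and the interface eth0:

logfacility local0
keepalive 2
deadtime 10
bcast eth0
auto_failback on
node nas01 nas02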
/etc/heartbeat/haresources: We will create this identical file on both servers:
This file specifies the name of the primary server, the virtual IP, the DRBD resource defined in /etc/drbd.conf (r0), the DRBD device and its mount point (/dev/drbd0, /data, ext3), and the service to monitor (nfs-kernel-server).
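A one-line sketch following that description; the virtual IP shown (192.168.124.240) is an assumption, since the real one is not given in the text:

nas01 IPaddr::192.168.124.240/24 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server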
/etc/heartbeat/authkeys: We will create this identical file on both servers:
Here we define the authentication mechanism (MD5) and the password with which the two heartbeat daemons of the servers authenticate against each other. Only root must have read permission on /etc/heartbeat/authkeys, so we set that:
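A sketch of the file and the permission change; the password is a placeholder:

auth 1
1 md5 secretpassword

# chmod 600 /etc/heartbeat/authkeys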
We start HeartBeat and check it.
Once the systems are installed, we go to PROXMOX and, under Storage, add the NFS storage.
We enter the name, put the virtual IP in the server field, and set the export to /data.
We can see the status of the storage
Now we go to the primary server, list /data, and see the directories that Proxmox has created.
Now we proceed to install Zentyal.
VERIFICATION
We will start by testing that, if a server hosting a virtual machine falls, the machine passes to another server in the cluster.
As you can see in the screenshot, the virtual machine is on the server proxmox03.
Now we proceed to turn off the server proxmox03, and the virtual machine moves to PROXMOX01.
Now the virtual machine is on the server proxmox02, working properly. We have timed it, and it takes about 2 minutes to move to the other server.
Now we will proceed to check the NAS servers: if the Nas01 server falls, the Nas02 server has to take over.
We run cat /proc/drbd and check that one of the servers is primary.
We see that the partition /data is mounted,
and the second server is secondary.
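On the primary, cat /proc/drbd prints a status line similar to this sketch:
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----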
We turn off the main server, and the secondary server becomes primary without affecting the storage.
We see that the partition /data has been mounted on this server.
Now we turn on the server that we had turned off, and it becomes the primary again, as at the beginning.
We had problems because the services did not start on boot, so we created a start script that launches them automatically.
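The script is only shown as a screenshot; a minimal sketch of what such a script does, assuming the drbd and heartbeat init services:

#!/bin/sh
# start the replication and failover services in order at boot
service drbd start
service heartbeat start
exit 0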
Security access
We wrote a script that restricts access to the PROXMOX administration interface.
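The script is only shown as a screenshot; a sketch using iptables, assuming the Proxmox web interface on its default port 8006 and the two allowed management addresses mentioned below:

#!/bin/sh
# allow the two management PCs and drop everyone else on the web UI port
iptables -A INPUT -p tcp --dport 8006 -s 192.168.124.77 -j ACCEPT
iptables -A INPUT -p tcp --dport 8006 -s 192.168.124.78 -j ACCEPT
iptables -A INPUT -p tcp --dport 8006 -j DROP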
Now we try to enter from an IP that is not .77 or .78, and this is what we get:
MIGRATION VM TO OTHER NODE
With Proxmox we can migrate a machine from one server to another. With this option we can hot-migrate a machine, without having to turn it off, when we want to perform a maintenance task on the node on which the instance is running.
1. Node > VM/CT > right click > Migrate
*Another option is to click on Migrate in the control panel of the VM.
2. The migration panel is so simple that it has only one parameter: the destination node. It informs us of its status just below the option.
3. It informs us about the status of the process.
4. Here we see the migrated VM running on the new node without problems.
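The same migration can be launched from the command line; a sketch, assuming the VM ID 100 and the destination node proxmox02:
# qm migrate 100 proxmox02 --online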
MANUAL INSTALLATION FreeNAS
1. We start by downloading the FreeNAS ISO, booting it, and launching the installer.
2. It warns us that ada0 will be completely erased and cannot be used for sharing data. Click Yes.
3. Enter the root password.
4. Now we choose to boot via BIOS.
5. FreeNAS begins to install.
6. It shows that it has been installed correctly.
7. Once installed, we restart in order to start the system and use it.
8. A menu appears from which we can configure it.
9. We enter the web interface through the IP.
10. We check that we can get in.
CREATE NFS
We will create storage for backups.
1. For the creation of our NFS server we use the FreeNAS installed in our network. In the Services section we activate the NFS module.
2. In the section Sharing > Unix NFS shares, click on Add Unix NFS.
3. Fill in the fields. The path will be the same one indicated in the Storage section.
4. We verify that it has been created correctly.
5. Now in proxmox we will add this new NFS unit.
We select our Data Center and enter the Storage section. Open the Add menu and select the NFS option, since that is the type of our storage.
In this screen we fill in all the options. The fields to be filled in are:
ID: the name we give this storage
SERVER: IP address of the NFS server
EXPORT: directory path on the server
CONTENT: what this storage will contain (backups, images, containers, ...)
NODES: which nodes will have access to this NFS server
ENABLE: to allow its use
MAX BACKUPS: how many backups to keep per VM/container. We recommend 5.
The same storage can also be added from the command line, as sketched below.
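A sketch with pvesm, assuming the storage ID backups, the SRV BACKUPS address from the table, and the export path /mnt/vms seen later on the NAS:
# pvesm add nfs backups --server 192.168.124.212 --export /mnt/vms --content backup --maxfiles 5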
6. Now we have our new storage available
scheduled backup
1. First we add a new storage, since backups should always be made to an external device.
2. On this screen we configure the whole backup. In our case we decided to create a backup of the Zentyal VM every day at 01:30. Its destination will be backups (an NFS share mounted on FreeNAS).
An email will be sent each time the task is launched, whether the backup is successful or not.
The mode will be snapshot, so there will be no errors and nothing bad will happen even if the VM is up and running.
3. When we launch the backup task we get this screen.
4. This is the email that arrives when the backup is done. Here we see 2 emails: one successful backup and one failed.
Backup for command line
We can make backups through the command line with the vzdump command. With the command qm list we can see the VMs currently on this node.
Command options:
--storage Specifies the storage where the backup will be saved
--mode (stop | suspend | snapshot):
stop
With stop, the machine is stopped during the backup process.
suspend
With OpenVZ virtual machines, rsync is used to copy the virtual machine to a temporary directory; then the machine is suspended, a second rsync copies the files, and the machine resumes operation.
With qemu/kvm machines the operation is similar to stop, but suspending/resuming the machine instead of stopping and starting it.
snapshot
It makes use of LVM to carry out the backup. There is no need to stop the machine, but additional space is needed for the creation of the LVM snapshot.
--compress (0 | 1 | gzip | lzo) Compresses the backup; by default it is lzo, and 1 is the same as lzo
--dumpdir directory Sets the directory where the created backup will be saved
--maxfiles 1-N Sets the maximum number of backups that can be stored for the same machine; by default it is 1. So if, for example, we had used --maxfiles 2 and there were already two backups in the directory, it would delete the oldest one, leaving two backups.
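Putting the options together, a sketch of a complete backup command, assuming VM ID 100 and the storage named backups:
# vzdump 100 --storage backups --mode snapshot --compress lzo --maxfiles 2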
If we go to the web browser we see that the task was launched
If we go to the NAS server, we see that the backup copies are in /mnt/vms.
Restoration Backup for command line
Backups can be restored from the command line and from the graphical interface.
From the command line we have two tools: vzrestore for OpenVZ containers and qmrestore for KVM virtual machines.
vzrestore [--storage storage_name] file vmid
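The KVM counterpart follows the same pattern; a sketch:
# qmrestore file vmid [--storage storage_name]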
CONCLUSIONS
This project has been useful for us to learn about high availability in virtualization.
We can say that installing Proxmox and configuring the high-availability cluster was easy to do. From there things changed: when synchronizing the two NAS servers we had quite a few problems, because we could not get them to synchronize so that when the main one falls, the secondary becomes the main one without losing information. We spent a lot of time researching, and in the end we managed to create a virtual IP. We did it with HeartBeat, which monitors and activates the secondary server if the first one fails, so the DRBD server responds on the virtual IP.
The biggest problem we had is that the VM does not get Internet access, since it cannot reach the gateway.
We also detected problems with VMs that use local storage when the VM is moved: the file is lost, and we were unable to recover it.
Server virtualization is simple, but you have to take certain security measures: for example, placing backups externally and monitoring the network and its IP configuration. The Proxmox system is simple at the VM level, but it is complicated at the network level in high availability.
We found this project interesting. We got on quite well with the Proxmox configuration, not so much with DRBD and its configuration. The DRBD and heartbeat services have problems at startup; we resolved this as we explained in this document.
BIBLIOGRAPHY
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster
https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x
https://es.wikipedia.org/wiki/Alta_disponibilidad
https://administradoresit.wordpress.com/2015/02/19/instalacion-proxmox/
https://enavas.blogspot.com.es/2016/12/proxmox-plantillas-y-contenedores.html
https://www.youtube.com/watch?v=-8SwpgaxFuk
DRBD:
https://docs.linbit.com/
https://www.theurbanpenguin.com/drbd-pacemaker-ha-cluster-ubuntu-16-04
http://www.estrellateyarde.org/virtualizacion/mirror-remoto-con-drbd
https://wiki.pandorafms.com/index.php?title=Pandora:Documentation_es:DRBD
https://www.linux-party.com/29-internet/9044-configurar-un-servidor-raid1-en-red-con-drbd-en-linux-1-de-2
https://sigterm.sh/2014/02/01/highly-available-nfs-cluster-on-debian-wheezy/
https://www.sebastien-han.fr/blog/2012/04/30/failover-active-passive-on-nfs-using-pacemaker-and-drbd
http://www.linux-admins.net/2014/04/deploying-highly-available-nfs-server.html
FIREWALL
http://nihilanthlnxc.cubava.cu/2015/09/04/cortafuegos-de-proxmox-ve/
BACKUPS
https://uninformaticoenisbilya.blogspot.com.es/2015/09/creacion-y-restauracion-de-backups-en.html
 
 
 
67 
 
 
 
 
 
68 
 
 
 
 
 
69 

More Related Content

What's hot

Room 3 - 7 - Nguyễn Như Phúc Huy - Vitastor: a fast and simple Ceph-like bloc...
Room 3 - 7 - Nguyễn Như Phúc Huy - Vitastor: a fast and simple Ceph-like bloc...Room 3 - 7 - Nguyễn Như Phúc Huy - Vitastor: a fast and simple Ceph-like bloc...
Room 3 - 7 - Nguyễn Như Phúc Huy - Vitastor: a fast and simple Ceph-like bloc...Vietnam Open Infrastructure User Group
 
VMware vSphere technical presentation
VMware vSphere technical presentationVMware vSphere technical presentation
VMware vSphere technical presentationaleyeldean
 
Building virtualised CloudStack test environments
Building virtualised CloudStack test environmentsBuilding virtualised CloudStack test environments
Building virtualised CloudStack test environmentsShapeBlue
 
Advanced Namespaces and cgroups
Advanced Namespaces and cgroupsAdvanced Namespaces and cgroups
Advanced Namespaces and cgroupsKernel TLV
 
Learn everything about IBM iNotes Customization
Learn everything about IBM iNotes CustomizationLearn everything about IBM iNotes Customization
Learn everything about IBM iNotes CustomizationIBM Connections Developers
 
Red Hat Enterprise Linux 8
Red Hat Enterprise Linux 8Red Hat Enterprise Linux 8
Red Hat Enterprise Linux 8Kangaroot
 
KVM tools and enterprise usage
KVM tools and enterprise usageKVM tools and enterprise usage
KVM tools and enterprise usagevincentvdk
 
OpenShift 4 installation
OpenShift 4 installationOpenShift 4 installation
OpenShift 4 installationRobert Bohne
 
NSX-T Architecture and Components.pptx
NSX-T Architecture and Components.pptxNSX-T Architecture and Components.pptx
NSX-T Architecture and Components.pptxAtif Raees
 
Alexei Vladishev - Zabbix - Monitoring Solution for Everyone
Alexei Vladishev - Zabbix - Monitoring Solution for EveryoneAlexei Vladishev - Zabbix - Monitoring Solution for Everyone
Alexei Vladishev - Zabbix - Monitoring Solution for EveryoneZabbix
 
Ceph Block Devices: A Deep Dive
Ceph Block Devices:  A Deep DiveCeph Block Devices:  A Deep Dive
Ceph Block Devices: A Deep DiveRed_Hat_Storage
 

What's hot (20)

Room 3 - 7 - Nguyễn Như Phúc Huy - Vitastor: a fast and simple Ceph-like bloc...
Room 3 - 7 - Nguyễn Như Phúc Huy - Vitastor: a fast and simple Ceph-like bloc...Room 3 - 7 - Nguyễn Như Phúc Huy - Vitastor: a fast and simple Ceph-like bloc...
Room 3 - 7 - Nguyễn Như Phúc Huy - Vitastor: a fast and simple Ceph-like bloc...
 
VMware vSphere technical presentation
VMware vSphere technical presentationVMware vSphere technical presentation
VMware vSphere technical presentation
 
Windows 2019
Windows 2019Windows 2019
Windows 2019
 
Building virtualised CloudStack test environments
Building virtualised CloudStack test environmentsBuilding virtualised CloudStack test environments
Building virtualised CloudStack test environments
 
Advanced Namespaces and cgroups
Advanced Namespaces and cgroupsAdvanced Namespaces and cgroups
Advanced Namespaces and cgroups
 
Learn everything about IBM iNotes Customization
Learn everything about IBM iNotes CustomizationLearn everything about IBM iNotes Customization
Learn everything about IBM iNotes Customization
 
Red Hat Enterprise Linux 8
Red Hat Enterprise Linux 8Red Hat Enterprise Linux 8
Red Hat Enterprise Linux 8
 
ansible why ?
ansible why ?ansible why ?
ansible why ?
 
Container Networking Deep Dive
Container Networking Deep DiveContainer Networking Deep Dive
Container Networking Deep Dive
 
KVM tools and enterprise usage
KVM tools and enterprise usageKVM tools and enterprise usage
KVM tools and enterprise usage
 
Basic 50 linus command
Basic 50 linus commandBasic 50 linus command
Basic 50 linus command
 
Ansible
AnsibleAnsible
Ansible
 
VMware vSphere
VMware vSphereVMware vSphere
VMware vSphere
 
OpenShift 4 installation
OpenShift 4 installationOpenShift 4 installation
OpenShift 4 installation
 
NSX-T Architecture and Components.pptx
NSX-T Architecture and Components.pptxNSX-T Architecture and Components.pptx
NSX-T Architecture and Components.pptx
 
Alexei Vladishev - Zabbix - Monitoring Solution for Everyone
Alexei Vladishev - Zabbix - Monitoring Solution for EveryoneAlexei Vladishev - Zabbix - Monitoring Solution for Everyone
Alexei Vladishev - Zabbix - Monitoring Solution for Everyone
 
Linux introduction
Linux introductionLinux introduction
Linux introduction
 
Ceph Block Devices: A Deep Dive
Ceph Block Devices:  A Deep DiveCeph Block Devices:  A Deep Dive
Ceph Block Devices: A Deep Dive
 
SDN OpenDaylight
SDN OpenDaylightSDN OpenDaylight
SDN OpenDaylight
 
Meetup 23 - 02 - OVN - The future of networking in OpenStack
Meetup 23 - 02 - OVN - The future of networking in OpenStackMeetup 23 - 02 - OVN - The future of networking in OpenStack
Meetup 23 - 02 - OVN - The future of networking in OpenStack
 

Similar to High availability virtualization with proxmox

Microsoft Windows Server 2012 Early Adopter Guide
Microsoft Windows Server 2012 Early Adopter GuideMicrosoft Windows Server 2012 Early Adopter Guide
Microsoft Windows Server 2012 Early Adopter GuideKingfin Enterprises Limited
 
LCNA14: Why Use Xen for Large Scale Enterprise Deployments? - Konrad Rzeszute...
LCNA14: Why Use Xen for Large Scale Enterprise Deployments? - Konrad Rzeszute...LCNA14: Why Use Xen for Large Scale Enterprise Deployments? - Konrad Rzeszute...
LCNA14: Why Use Xen for Large Scale Enterprise Deployments? - Konrad Rzeszute...The Linux Foundation
 
2011-03-15 Lockheed Martin Open Source Day
2011-03-15 Lockheed Martin Open Source Day2011-03-15 Lockheed Martin Open Source Day
2011-03-15 Lockheed Martin Open Source DayShawn Wells
 
Making clouds: turning opennebula into a product
Making clouds: turning opennebula into a productMaking clouds: turning opennebula into a product
Making clouds: turning opennebula into a productCarlo Daffara
 
Making Clouds: Turning OpenNebula into a Product
Making Clouds: Turning OpenNebula into a ProductMaking Clouds: Turning OpenNebula into a Product
Making Clouds: Turning OpenNebula into a ProductNETWAYS
 
OpenNebulaConf 2013 - Making Clouds: Turning OpenNebula into a Product by Car...
OpenNebulaConf 2013 - Making Clouds: Turning OpenNebula into a Product by Car...OpenNebulaConf 2013 - Making Clouds: Turning OpenNebula into a Product by Car...
OpenNebulaConf 2013 - Making Clouds: Turning OpenNebula into a Product by Car...OpenNebula Project
 
Improve performance and gain room to grow by easily migrating to a modern Ope...
Improve performance and gain room to grow by easily migrating to a modern Ope...Improve performance and gain room to grow by easily migrating to a modern Ope...
Improve performance and gain room to grow by easily migrating to a modern Ope...Principled Technologies
 
Project on squid proxy in rhel 6
Project on squid proxy in rhel 6Project on squid proxy in rhel 6
Project on squid proxy in rhel 6Nutan Kumar Panda
 
Linux Virtualization
Linux VirtualizationLinux Virtualization
Linux VirtualizationOpenVZ
 
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISOR
LOAD BALANCING OF APPLICATIONS  USING XEN HYPERVISORLOAD BALANCING OF APPLICATIONS  USING XEN HYPERVISOR
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISORVanika Kapoor
 
2015 02-10 xen server master class
2015 02-10 xen server master class2015 02-10 xen server master class
2015 02-10 xen server master classCitrix
 
cloud virtualization technology
 cloud virtualization technology  cloud virtualization technology
cloud virtualization technology Ravindra Dastikop
 
Accelerating Hyper-Converged Enterprise Virtualization using Proxmox and Ceph
Accelerating Hyper-Converged Enterprise Virtualization using Proxmox and CephAccelerating Hyper-Converged Enterprise Virtualization using Proxmox and Ceph
Accelerating Hyper-Converged Enterprise Virtualization using Proxmox and CephBangladesh Network Operators Group
 
SYSAD323 Virtualization Basics
SYSAD323 Virtualization BasicsSYSAD323 Virtualization Basics
SYSAD323 Virtualization BasicsDon Bosco BSIT
 
A Survey of Performance Comparison between Virtual Machines and Containers
A Survey of Performance Comparison between Virtual Machines and ContainersA Survey of Performance Comparison between Virtual Machines and Containers
A Survey of Performance Comparison between Virtual Machines and Containersprashant desai
 
Docker and containers : Disrupting the virtual machine(VM)
Docker and containers : Disrupting the virtual machine(VM)Docker and containers : Disrupting the virtual machine(VM)
Docker and containers : Disrupting the virtual machine(VM)Rama Krishna B
 

Similar to High availability virtualization with proxmox (20)

Rhel7 vs rhel6
Rhel7 vs rhel6Rhel7 vs rhel6
Rhel7 vs rhel6
 
Microsoft Windows Server 2012 Early Adopter Guide
Microsoft Windows Server 2012 Early Adopter GuideMicrosoft Windows Server 2012 Early Adopter Guide
Microsoft Windows Server 2012 Early Adopter Guide
 
LCNA14: Why Use Xen for Large Scale Enterprise Deployments? - Konrad Rzeszute...
LCNA14: Why Use Xen for Large Scale Enterprise Deployments? - Konrad Rzeszute...LCNA14: Why Use Xen for Large Scale Enterprise Deployments? - Konrad Rzeszute...
LCNA14: Why Use Xen for Large Scale Enterprise Deployments? - Konrad Rzeszute...
 
Virtualization 101
Virtualization 101Virtualization 101
Virtualization 101
 
Handout2o
Handout2oHandout2o
Handout2o
 
2011-03-15 Lockheed Martin Open Source Day
2011-03-15 Lockheed Martin Open Source Day2011-03-15 Lockheed Martin Open Source Day
2011-03-15 Lockheed Martin Open Source Day
 
Making clouds: turning opennebula into a product
Making clouds: turning opennebula into a productMaking clouds: turning opennebula into a product
Making clouds: turning opennebula into a product
 
Making Clouds: Turning OpenNebula into a Product
Making Clouds: Turning OpenNebula into a ProductMaking Clouds: Turning OpenNebula into a Product
Making Clouds: Turning OpenNebula into a Product
 
OpenNebulaConf 2013 - Making Clouds: Turning OpenNebula into a Product by Car...
OpenNebulaConf 2013 - Making Clouds: Turning OpenNebula into a Product by Car...OpenNebulaConf 2013 - Making Clouds: Turning OpenNebula into a Product by Car...
OpenNebulaConf 2013 - Making Clouds: Turning OpenNebula into a Product by Car...
 
Improve performance and gain room to grow by easily migrating to a modern Ope...
Improve performance and gain room to grow by easily migrating to a modern Ope...Improve performance and gain room to grow by easily migrating to a modern Ope...
Improve performance and gain room to grow by easily migrating to a modern Ope...
 
Project on squid proxy in rhel 6
Project on squid proxy in rhel 6Project on squid proxy in rhel 6
Project on squid proxy in rhel 6
 
OpenStack on SmartOS
OpenStack on SmartOSOpenStack on SmartOS
OpenStack on SmartOS
 
Linux Virtualization
Linux VirtualizationLinux Virtualization
Linux Virtualization
 
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISOR
LOAD BALANCING OF APPLICATIONS  USING XEN HYPERVISORLOAD BALANCING OF APPLICATIONS  USING XEN HYPERVISOR
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISOR
 
2015 02-10 xen server master class
2015 02-10 xen server master class2015 02-10 xen server master class
2015 02-10 xen server master class
 
cloud virtualization technology
 cloud virtualization technology  cloud virtualization technology
cloud virtualization technology
 
Accelerating Hyper-Converged Enterprise Virtualization using Proxmox and Ceph
Accelerating Hyper-Converged Enterprise Virtualization using Proxmox and CephAccelerating Hyper-Converged Enterprise Virtualization using Proxmox and Ceph
Accelerating Hyper-Converged Enterprise Virtualization using Proxmox and Ceph
 
SYSAD323 Virtualization Basics
SYSAD323 Virtualization BasicsSYSAD323 Virtualization Basics
SYSAD323 Virtualization Basics
 
A Survey of Performance Comparison between Virtual Machines and Containers
A Survey of Performance Comparison between Virtual Machines and ContainersA Survey of Performance Comparison between Virtual Machines and Containers
A Survey of Performance Comparison between Virtual Machines and Containers
 
Docker and containers : Disrupting the virtual machine(VM)
Docker and containers : Disrupting the virtual machine(VM)Docker and containers : Disrupting the virtual machine(VM)
Docker and containers : Disrupting the virtual machine(VM)
 

Recently uploaded

Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot ModelMcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot ModelDeepika Singh
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Victor Rentea
 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontologyjohnbeverley2021
 
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)Samir Dash
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdfSandro Moreira
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businesspanagenda
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodJuan lago vázquez
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...apidays
 
Introduction to use of FHIR Documents in ABDM
Introduction to use of FHIR Documents in ABDMIntroduction to use of FHIR Documents in ABDM
Introduction to use of FHIR Documents in ABDMKumar Satyam
 
AI in Action: Real World Use Cases by Anitaraj
AI in Action: Real World Use Cases by AnitarajAI in Action: Real World Use Cases by Anitaraj
AI in Action: Real World Use Cases by AnitarajAnitaRaj43
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherRemote DBA Services
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamUiPathCommunity
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDropbox
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWERMadyBayot
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 
CNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In PakistanCNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In Pakistandanishmna97
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfOrbitshub
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024The Digital Insurer
 

Recently uploaded (20)

Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot ModelMcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontology
 
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Introduction to use of FHIR Documents in ABDM
Introduction to use of FHIR Documents in ABDMIntroduction to use of FHIR Documents in ABDM
Introduction to use of FHIR Documents in ABDM
 
AI in Action: Real World Use Cases by Anitaraj
AI in Action: Real World Use Cases by AnitarajAI in Action: Real World Use Cases by Anitaraj
AI in Action: Real World Use Cases by Anitaraj
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
CNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In PakistanCNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In Pakistan
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 

High availability virtualization with proxmox

  • 1.   HIGH AVAILABILITY VIRTUALIZATION WITH PROXMOX ORIOL IZQUIERDO JUNIOR PILLIGUA ASIX-2  
  • 2.     INDEX INTRODUCTION 3 TECHNOLOGY 4 WHAT IS VIRTUALIZATION AND WHAT VIRTUALIZATION BENEFITS US. 5 VIRTUALIZATION SOLUTIONS BASED ON HYPERVISOR 7 OBJECTIVES 8 STRUCTURE AND NETWORK DIAGRAM 9 MANUAL INSTALLATION PROXMOX 11 CREATE CLUSTER 19 CONFIGURATION BONDING 21 PROBLEMS WITH KVM 28 INSTALL DRBD 28 CONFIGURATION NFS 33 CONFIGURATION HEARTBEAT 34 VERIFICATION 39 Security access 45 MIGRATION VM TO OTHER NODE 47 MANUAL INSTALLATION FreeNAS 50 CREATE NFS 54 scheduled backup 58 Backup for command line 62 Restoration Backup for command line 65 CONCLUSIONS 66 BIBLIOGRAPHY 67       2 
  • 3.     INTRODUCTION We are a medical center that we are growing every day, we have several specialties and we also have 100 employees and 50 physicians, but we are currently expanding the template. We can also say that we have a large number of patients. But lately our patient is complaining because our system has fallen, and we haven't been able to work normally. For this reason, we have come to the conclusion of mounting a high availability cluster to be able to work more tranquility and so we will continue to offer a better service to our patient and we will be able to continue growing. In this project we do the installation and configuration of a high availability cluster with PROXMOX, we will use three servers in which the Zentyal is installed, where we can configure the Active Directory or configure the DNS or DHCP. We will have the three servers in operation if one of the servers falls, with the high availability would not have to happen anything, would have to continue running with the other two servers. In our network we will have a "management" PC which only he will have access to administer the Proxmox. We will have two NAS servers where the data will be synchronized on both disks. Finally we will make a backup of the machines Proxmox, in theory this copy would have to be in a different place but we can not do this.       3 
  • 4.     TECHNOLOGY Proxmox Virtual Environment​: ​is an open source server virtualization environment. It is in Debian-based GNU/Linux distributions with a modified version of the RHEL Kernel and allows the deployment and management of virtual machines and containers. The main features of Proxmox VE are: ● is open source ● allows live migration ● has a high network bridge enabling ● OS construction templates ● scheduled backups ● Line tools Commands Bonding: is a method of combination two or more network interfaces into a single interface. This increases network processing, bandwidth, and redundancy. If an interface is disconnected or inactive, the other interface keeps network traffic alive. This linkage is used in situations where you need redundancy, fault tolerance, or balance in the network load. High-availability cluster​: A high-availability cluster is a set of two or more machines that are characterized by maintaining a series of shared services and constantly monitoring each other. If a hardware failure occurs on any of the cluster machines, the high-availability software is capable of automatically booting services into any of the other cluster-forming nodes. KVM: ​A complete virtualization solution that allows us to run multiple virtual machines (Windows, Linux, Unix 32 or 64 bits), in which each virtual machine will have its own virtual hardware       4 
  • 5.     OpenVZ: Container-based virtualization for LINUX. Proxmox allows us to execute multiple "instances" of isolated operating systems on a single physical server, with the advantage that each MV uses the Hardware resources of the host server. DRBD: ​(Distributed Replicated Block Device) Allows remote mirror in real time (equivalent to RAID-1 network), something very difficult to achieve with other systems. NFS: The Network File System (NFS) is a client/server application that allows a computer user to view and optionally store and update files on a remote computer as if they were on the user's own computer HeartBeat: Application to provide high availability to two servers, which will share an IP address. One will be the asset and the other the passive. WHAT IS VIRTUALIZATION AND WHAT VIRTUALIZATION BENEFITS US. Virtualization is the ability to allow several logical machines in a single physical. This through specialized softwares or in more professional environments like the one that this project deals with. Each virtual machine is isolated from the rest, it can create a farm of independent servers in a single physical machine, with the decrease of economic and energetic cost that this supposes. This technology is based on hypervisor. The hypervisor This technology is between the HW of the real server and the virtual machines. The hypervisor assigns the amount of access that each virtual machine has to the processor, memory, hard disk and other resources. The son benefits:       5 
  • 6.     Highest utilization rates Before virtualization, server utilization rates and storage in the company's data centers were less than 50%. Through virtualization, workloads can be encapsulated and transferred to idle or unused systems. Resource Consolidation Virtualization allows the consolidation of multiple resources. Beyond consolidating storage, virtualization offers an opportunity to consolidate the architecture of systems, application infrastructure, data and database, interfaces, networks, desktops, and even business processes, resulting in cost savings and greater efficiency . Less energy use The electricity required to run in business class data centers is no longer available in unlimited supplies, and the cost is spiraling upward. Using virtualization, you can reduce your total energy consumption and save money in a meaningful way.       6 
  • 7.     Retrench of space The dimensions of the servers is a serious problem in most companies, with virtualization from which you can save physical machines. Disaster recovery / business continuity Virtualization increases the availability of the services installed in it. And all virtualization tools new solutions solutions for disaster recovery. Ease of maintenance and supervision of critical systems All the virtualization tools in Hypervisor incorporate different tools for its correct configuration and maintenance. Some of these pass through tools such as monitoring, notification by email, notice of extreme use of some HW component. Being able to have a summary of our structure in an easy and fast way, since these tools allow the monitoring and configuration of several nodes. WHAT IS KVM? It is a complete virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). With KVM, we can run multiple virtual machines without modifying the Linux or Windows images that are running. KVM is part of RedHat Emerging Technologies VIRTUALIZATION SOLUTIONS BASED ON HYPERVISOR VMWARE ESXi HYPER-V XENSERVER PROXMOX VE SO SUPPORTED WINDOWS / LINUX WINDOWS / LINUX (LIMITED SUPPORT) WINDOWS / LINUX WINDOWS / LINUX LICENSE PAYMENT PAYMENT (INCL. WSERVER) PAYMENT FREE CONTAINERS NO NO NO YES - OPENVZ       7 
  • 8.     MAXIMUM HARDWARE SUPPORTED 128 CPU / 4TB RAM / 62TB ARCHIVO VMDK 240 CPU / 12 TB RAM / 128 AD. RED / 64 TB DISCO VIRTUAL 32 CPU / 1.5 TB RAM / 2TB HDD / 7 T. RED 160 CPU / 2TB RAM / 4000 T. RED SAN STORAGE YES YES YES YES MIGRATION OF VM IN HOT YES YES (LIMITED) YES YES SNAPSHOTS YES YES YES YES We have chosen Proxmox because it is an open source software. And after reading several articles and forums, we have been singing for this software, and so in the company we will have good software to have high availability. OBJECTIVES Our main goal is to create a secure, high-availability, fully open-source virtualized system. To do this we use two servers with PROXMOX with dual network card and two Nas servers that are synchronized between them. With this system we will achieve high availability in our company with the purpose that we can work properly and we do not have problems in which a server falls down and spend hours without working.       8 
  • 9.     STRUCTURE AND NETWORK DIAGRAM       9 
  • 10.     IP MACHINE REASON 192.168.124.200 PROXMOX 01 SERVER PROXMOX BELONGING TO THE NODE 192.168.124.205 PROXMOX 02 SERVER PROXMOX BELONGING TO THE NODE 192.168.124.215 PROXMOX 03 SERVER PROXMOX BELONGING TO THE NODE 192.168.124.230 DRBD 01 SERVER WHERE WE ACCOMMEND THE VIRTUAL MACHINES AND BACKUPS OF OUR SYSTEM 192.168.124.231 DRBD 02 SERVER WHERE WE ACCOMMEND THE VIRTUAL MACHINES AND BACKUPS OF OUR SYSTEM 192.168.124.216 Ubuntu Server PROXY SERVER AND WEB SERVER 192.168.124.212 SRV BACKUPS - NFS SERVER WHERE BACKUPS OF OUR SYSTEM       10 
  • 11.     MANUAL INSTALLATION PROXMOX Once we have downloaded the ISO and boteado the ISO. We start the installation, we will find a menu with different options. ● Install Proxmox ● Install Proxmox, but in debug mode ● Retrieve the boot sector of the disk ● Check the integrity of the computer's memory. In our house we choose the first option: Install Proxmox VE       11 
  • 12.     The wizard that will join us during the entire installation process is started. In this case, it shows us the license agreement, which we must read and accept before proceeding with the process. Now we have to choose the hard drive where you will install Proxmox VE       12 
  • 13.     Here we have to make the time zone settings. After the time zone, we should write the password so that later we can login as root user. With this we will have finished the setup of the installation. From here the wizard configures the disk and installs the system       13 
  • 14.     Once installed, you will be ready to restart the system and start using it. When we start the system the first thing we will see is a blue screen, with different options.       14 
  • 15.     Here we have a text mode screen, which is requested by the user and the password, we can login from a Web interface. Click to access       15 
We log in to the web interface with the user and password. Now we have the Proxmox VE management interface in front of us.
UPDATING /etc/hosts

We edit the /etc/hosts file of the three servers and add the name and IP address of every node.

UPDATING SYSTEM AND TEMPLATES

Although we downloaded the ISO recently, it is always a good idea to update the system with the command:
# apt-get update && apt-get dist-upgrade
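Returning to the /etc/hosts step above, the entries added might look like this sketch (the hostnames are assumptions, chosen to match the node names used later in this document):

    cat >> /etc/hosts << 'EOF'
    192.168.124.200 proxmox01
    192.168.124.205 proxmox02
    192.168.124.215 proxmox03
    EOF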
The templates are images of operating systems already prepared for startup; on first boot they ask us for a series of parameters depending on their purpose. An example of this is the web https://www.turnkeylinux.org. In order to use this resource we must update the template list that Proxmox includes by default. The list can be seen with the command:
# pveam available
If we execute it before updating, only the system templates appear. Now we update with the command:
# pveam update
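Once the list is updated, a template can be downloaded to a storage, for example (the template file name and the target storage "local" are illustrative):

    pveam download local debian-9.0-standard_9.3-1_amd64.tar.gz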
CREATE CLUSTER

To create the cluster we first need a node that will act as the main one; in our case Node 1 is the main node. We create the cluster with the command pvecm create followed by the name we want to give it, in our case "clusterasix". With the cluster created, the next step is to connect to the other two nodes and join them with the command pvecm add followed by the IP of Node 1, where the cluster was created.
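In summary, the commands are (the cluster name and the IP of Node 1 are the ones used in this project):

    # On Node 1, create the cluster:
    pvecm create clusterasix
    # On Nodes 2 and 3, join it:
    pvecm add 192.168.124.200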
When we have added both nodes we can check the status. If we enter the web interface we can see the three nodes.
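The check can also be done from the shell of any node:

    pvecm status
    pvecm nodes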
CONFIGURATION BONDING

To configure the bonding we must first edit the bridge interface, named vmbr0. We change the port it uses by default to the bond that we are going to create, named bond0; the rest of the parameters are left the same. Now we click on Linux Bond.
We add all the interfaces that will form the bond and choose the bond mode, which will be balance-rr. This is the final configuration of the bonding.
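The resulting /etc/network/interfaces on a node might look like this sketch (the physical interface names, address and gateway are assumptions):

    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode balance-rr

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.124.200
        netmask 255.255.255.0
        gateway 192.168.124.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0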
Here we can see that it has been created correctly. The bonding configuration has to be applied on all three servers.
INSTALL CEPH

To allow the movement of virtual machines between nodes, it is necessary to install and configure Ceph. It is a package that we install on all the nodes; on one of them (the main one) we indicate the network in which it works, and Ceph makes this quite easy. Its installation on all the nodes of our network is:
# pveceph install --version luminous
Then we activate the Ceph monitor service:
# pveceph init --network x.x.x.x/X
# pveceph createmon
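With our addressing, and assuming the cluster network is the /24 used throughout this project, the initialization would be:

    pveceph init --network 192.168.124.0/24
    pveceph createmon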
We start the service on all the nodes. From the web interface we can then create and view the monitors on each node: Node > Ceph > Monitor > Create.
PROBLEMS WITH KVM

We detected a problem when booting our test VM; on startup it gave us the following error. After some research we saw that KVM hardware virtualization cannot be active inside the VMs and on the real machine at the same time, so we changed the KVM hardware virtualization setting of the VM to get a clean boot.
INSTALL DRBD

DRBD allows remote mirroring in real time (the equivalent of a RAID-1 over the network), something very difficult to achieve with other systems. DRBD creates a block device, drbd0, accessible from both servers. The primary server is the one that has access to the drbd0 device: every time something is written to drbd0 it is written to the physical partition, and the same data is sent over TCP/IP to the secondary server, keeping both physical partitions synchronized, exactly like a RAID-1. With this software we synchronize the information: if one server falls, the other takes over, so we do not lose the information. First of all we installed two Debian servers and added a hard disk of the same size to each one.
Once we have the two servers, we partition the disks with fdisk. Then we proceed to install DRBD on the two servers.
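These preparation steps might look like this (the device name /dev/sdb is an assumption):

    # On both servers: create one primary partition on the dedicated disk
    fdisk /dev/sdb
    # On both servers: install the DRBD userland tools
    apt-get install drbd8-utils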
In the DRBD configuration file we add the device, the disk where we create the storage, and the IP addresses of the two servers.
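A sketch of the resource definition, for example /etc/drbd.d/r0.res (the host names must match uname -n on each server; the disk device and port are assumptions; the IPs are those of our two NAS servers, and r0 is the resource name used later by Heartbeat):

    resource r0 {
        protocol C;
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
        on nas01 {
            address 192.168.124.230:7788;
        }
        on nas02 {
            address 192.168.124.231:7788;
        }
    }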
We initialize the metadata storage on both servers and activate DRBD on both. To configure DRBD01 as the primary server, we execute the corresponding command on server 1. We can see in the image that it appears as primary; the second disk was still missing, and at first we had quite a few problems getting them to synchronize. On server 2 it comes up as secondary.
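These steps correspond to the following commands (DRBD 8.4 syntax):

    # On both servers: initialize the metadata
    drbdadm create-md r0
    # On both servers: bring the resource up
    drbdadm up r0
    # On server 1 only: force it to primary so the initial sync starts
    drbdadm primary --force r0
    # (on older DRBD versions: drbdadm -- --overwrite-data-of-peer primary r0)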
The primary disk will hold all the information, and the secondary will be waiting in case the primary goes down.

CONFIGURATION NFS

We install the NFS server on both servers and delete the NFS startup scripts (Heartbeat will manage the service). Then we create an ext3 file system on the DRBD device and mount it in the directory /data. This has to be done only on server 1.
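A sketch of these steps (the export options are assumptions):

    # On both servers: install the NFS server
    apt-get install nfs-kernel-server
    # On both servers: remove the startup scripts, Heartbeat will manage the service
    update-rc.d -f nfs-kernel-server remove
    update-rc.d -f nfs-common remove
    # On server 1 only: create the file system on the DRBD device and mount it
    mkfs.ext3 /dev/drbd0
    mkdir -p /data
    mount /dev/drbd0 /data
    # Export /data to our network (assumed options)
    echo '/data 192.168.124.0/24(rw,sync,no_root_squash,no_subtree_check)' >> /etc/exports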
We verify that it has been mounted.

CONFIGURATION HEARTBEAT

We will use Heartbeat because we had a problem: when the primary server fell, the secondary did not take over. With this software we have created a virtual IP, so when the primary falls, the secondary takes over the service. Let's look at the Heartbeat configuration. We install Heartbeat on both servers. Heartbeat controls the whole issue: it launches and stops NFS on both servers, monitors and activates the secondary server if the primary fails, and ensures that the NFS server responds on the virtual IP.
We install Heartbeat and create three files, each identical on both servers:

/etc/heartbeat/ha.cf: the general Heartbeat configuration.

/etc/heartbeat/haresources: this file specifies the name of the primary server, the virtual IP, the DRBD resource defined in /etc/drbd.conf (r0), the DRBD device (/dev/drbd0, /data, ext3), and the service to monitor (nfs-kernel-server).

/etc/heartbeat/authkeys: the authentication file.
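Sketches of the three files, assuming the node names nas01 and nas02, the interface eth0 and a virtual IP of 192.168.124.235 (the virtual IP, interface and password are assumptions; r0, /dev/drbd0, /data, ext3 and nfs-kernel-server come from our configuration):

/etc/heartbeat/ha.cf:

    logfile /var/log/ha-log
    keepalive 2
    deadtime 10
    bcast eth0
    auto_failback on
    node nas01
    node nas02

/etc/heartbeat/haresources:

    # primary-node  virtual-IP  DRBD-resource  file-system  service-to-monitor
    nas01 IPaddr::192.168.124.235/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server

/etc/heartbeat/authkeys:

    auth 3
    3 md5 mysecretpassword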
Here we define the authentication mechanism (MD5) and the password with which the two Heartbeat daemons of the servers authenticate against each other. Only root must have read permission on /etc/heartbeat/authkeys, so we restrict it. We start Heartbeat and check it. Once the system is up, we go to Proxmox and, under Storage, add the NFS storage.
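That is:

    chmod 600 /etc/heartbeat/authkeys
    /etc/init.d/heartbeat start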
We enter the name, put the virtual IP in the server field, and the export will be /data. We can then see the status of the storage.
Now we go to the primary server, list the contents of /data, and see the folders that have been created there. Next we proceed to install Zentyal.
VERIFICATION

We will start by testing that, if the server hosting a virtual machine falls, the machine passes to another server of the cluster. As you can see in the photo, the virtual machine is on the server proxmox03.
Now we proceed to turn off the server proxmox03, and the virtual machine moves to PROXMOX01.
Now the virtual machine is on the server proxmox02, working properly. We timed it, and it takes about 2 minutes to move to the other server.
Now we will check the NAS servers: if the server Nas01 falls, the server Nas02 must come up. We run cat /proc/drbd and check that one of the servers is primary; we see that the /data partition is mounted, while the second server is secondary.
We turn off the main server, and the secondary server becomes primary without affecting the storage; we see that the /data partition has been mounted on that server. Now we turn on the server that we had turned off, and it becomes the primary again, as at the beginning.
We had problems because the services did not start on boot, so we created a start script that starts them automatically.
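A minimal sketch of such a script (the name, ordering and delay are assumptions about our setup):

    #!/bin/sh
    # /etc/init.d/ha-start (hypothetical): bring up DRBD first, then Heartbeat
    /etc/init.d/drbd start
    sleep 5
    /etc/init.d/heartbeat start

It would then be registered to run at boot with update-rc.d ha-start defaults.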
Security access

A script that blocks access to the administration interface of Proxmox.
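A sketch of such a script, assuming the allowed management PCs are 192.168.124.77 and 192.168.124.78 (the Proxmox web GUI listens on port 8006):

    #!/bin/sh
    # Allow the two management PCs to reach the Proxmox web GUI
    iptables -A INPUT -p tcp --dport 8006 -s 192.168.124.77 -j ACCEPT
    iptables -A INPUT -p tcp --dport 8006 -s 192.168.124.78 -j ACCEPT
    # Drop everyone else
    iptables -A INPUT -p tcp --dport 8006 -j DROP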
Now we try to enter with an IP that is not .77 or .78, and this is what happens.
MIGRATION VM TO OTHER NODE

With Proxmox we can migrate a machine from one server to another. With this option we can live-migrate a machine, without having to turn it off, when we want to perform a maintenance task on the node on which the instance is running.
1. Node > VM/CT > right click > Migrate
*Another option is to click on Migrate in the control panel of the VM.
2. The migration panel is so simple that it has only one parameter: the destination node. It shows the status just below the option.
3. It informs us about the status of the process.
4. Here we see the migrated VM running without problems.
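The same live migration can also be launched from the command line (the VM id is illustrative):

    qm migrate 100 proxmox01 --online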
MANUAL INSTALLATION FreeNAS

1. We start by downloading the FreeNAS ISO, booting from it, and starting the installation.
2. It warns us that ada0 will be completely erased and that the boot device cannot be used for sharing data. Click Yes.
3. Enter the root password.
4. Now we choose to boot via BIOS.
5. The FreeNAS installation starts.
6. It shows that it has been installed correctly.
7. Once installed, we restart in order to start the system and use it.
8. A menu appears from which we can configure it.
9. We enter the web interface through the IP.
10. We check that we can get in.
CREATE NFS

We will create storage for the backups.
1. For the creation of our NFS server we use the FreeNAS installed in our network. In the Services section we activate the NFS module.
2. In the section Sharing > Unix NFS shares we click on Add Unix NFS.
3. Fill in the fields. The path will be the same route indicated in Storage.
4. We verify that it has been created correctly.
5. Now in Proxmox we will add this new NFS unit. We select our Data Center and enter the Storage section. We open the Add menu and select the NFS option, since it is the type of our storage. In this screen we fill in all the options. The fields to be filled in are:
ID: the name we give this storage
SERVER: IP address of the NFS server
EXPORT: directory path on the server
CONTENT: we select what this storage will contain (backups, images, containers, ...)
NODES: which nodes will have access to this NFS server
ENABLE: to allow its use
MAX BACKUPS: how many backups to keep per VM/container; we recommend 5
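The same storage could also be added from the command line (the storage name is an assumption; the IP and export path are the ones used in this project):

    pvesm add nfs backups --server 192.168.124.212 --export /mnt/vms --content backup --maxfiles 5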
6. Now we have our new storage available.
scheduled backup

1. First we add a new storage, since backups must always be made to an external device, separate from the rest of the system.
2. On this screen we configure the whole backup job. In our case we decided to create a backup of the Zentyal VM every day at 1:30. Its destination will be backups (an NFS storage mounted on the FreeNAS). An email will be sent each time the task is launched, whether the backup succeeds or fails. The mode will be snapshot, so there will be no errors and nothing will happen even if the VM is up and running.
3. When we launch the backup task we get this screen.
4. This is the email that arrives when the backup is done. Here we see two emails: one successful backup and one failed.
Backup for command line

We can make backups through the command line with the vzdump command. With the command qm list we see the VMs that are currently on this node.

Command options:
--storage: specifies the storage where the backup will be saved.
--mode (stop | suspend | snapshot):
  stop: the machine is stopped during the backup process.
  suspend: with OpenVZ virtual machines, rsync is used to copy the virtual machine to a temporary directory; then the machine is suspended, a second rsync copies the remaining files, and the machine's operation is resumed. With QEMU/KVM machines the operation is similar to stop, but suspending/resuming the machine instead of stopping and starting it.
  snapshot: makes use of LVM to carry out the backup. There is no need to stop the machine, but additional space is needed to create the LVM snapshot.
--compress (0 | 1 | gzip | lzo): compresses the backup; by default it is lzo, and 1 is the same as lzo.
--dumpdir directory: sets the directory where the created backup will be saved.
--maxfiles 1-N: sets the maximum number of backups that can be stored for the same machine; by default it is 1. So if, for example, we had set --maxfiles 2 and there were already two backups in the directory, it would delete the oldest one, leaving two backups. If we go to the web browser we see that the task was launched.
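Putting these options together, an invocation like the one we launched might look like this (the VM id is illustrative):

    vzdump 100 --mode snapshot --compress lzo --storage backups --maxfiles 2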
If we go to the NAS server we see that the backup copies are in /mnt/vms.
Restoration Backup for command line

Backup copies can be restored by command line and from the graphical interface. Through the command line we have two applications: vzrestore for OpenVZ VMs and qmrestore for KVM virtual machines.
vzrestore [--storage storage_name] file vmid
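For example (the archive names and VM id are illustrative):

    # Restore an OpenVZ container backup
    vzrestore /mnt/vms/dump/vzdump-openvz-100.tar.lzo 100
    # Restore a KVM virtual machine backup
    qmrestore /mnt/vms/dump/vzdump-qemu-100.vma.lzo 100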
CONCLUSIONS

This project has been useful for us to learn about high availability in virtualization. We can say that the installation of Proxmox and the configuration of the high-availability cluster were easy to carry out. From there things changed: when synchronizing the two NAS servers we had quite a few problems, because at first we could not get them to synchronize so that, when the main one falls, the secondary becomes the main one and no information is lost. We spent a lot of time researching, and in the end we managed to create a virtual IP. We did it with Heartbeat, which monitors and activates the secondary server if the first one fails and makes the DRBD/NFS service respond on the virtual IP.

The biggest problem we had is that the VM did not get internet access, since it could not reach the gateway. We also detected problems with a VM on local storage when the VM is moved: the file is lost and we had to recreate it.

Server virtualization is simple, but certain security measures must be taken, for example placing backups externally and monitoring the network and its IP configuration. The Proxmox system is simple at the VM level, but it is complicated at the network level in high availability.

We found this project interesting. We got along quite well with the Proxmox configuration, not so much with DRBD and its configuration. The DRBD and Heartbeat services have problems at startup; we resolved this as we have explained in this document.
BIBLIOGRAPHY

https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster
https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x
https://es.wikipedia.org/wiki/Alta_disponibilidad
https://administradoresit.wordpress.com/2015/02/19/instalacion-proxmox/
https://enavas.blogspot.com.es/2016/12/proxmox-plantillas-y-contenedores.html
https://www.youtube.com/watch?v=-8SwpgaxFuk

DRBD:
https://docs.linbit.com/
https://www.theurbanpenguin.com/drbd-pacemaker-ha-cluster-ubuntu-16-04
http://www.estrellateyarde.org/virtualizacion/mirror-remoto-con-drbd
https://wiki.pandorafms.com/index.php?title=Pandora:Documentation_es:DRBD
https://www.linux-party.com/29-internet/9044-configurar-un-servidor-raid1-en-red-con-drbd-en-linux-1-de-2
https://sigterm.sh/2014/02/01/highly-available-nfs-cluster-on-debian-wheezy/
https://www.sebastien-han.fr/blog/2012/04/30/failover-active-passive-on-nfs-using-pacemaker-and-drbd
http://www.linux-admins.net/2014/04/deploying-highly-available-nfs-server.html

FIREWALL:
http://nihilanthlnxc.cubava.cu/2015/09/04/cortafuegos-de-proxmox-ve/

BACKUPS:
https://uninformaticoenisbilya.blogspot.com.es/2015/09/creacion-y-restauracion-de-backups-en.html