Final project for the Higher Vocational Training Cycle in Networked Systems Administration (ASIR/ASIX).
The project consists of virtualizing servers to provide high availability (HA) with Proxmox. The service finally provided was a proxy and web server; due to lack of time and problems with the Zentyal configuration, installing Zentyal proved impossible.
INDEX
INTRODUCTION
TECHNOLOGY
WHAT IS VIRTUALIZATION AND HOW IT BENEFITS US
VIRTUALIZATION SOLUTIONS BASED ON HYPERVISORS
OBJECTIVES
STRUCTURE AND NETWORK DIAGRAM
MANUAL INSTALLATION OF PROXMOX
CREATING THE CLUSTER
BONDING CONFIGURATION
INSTALLING CEPH
PROBLEMS WITH KVM
INSTALLING DRBD
NFS CONFIGURATION
HEARTBEAT CONFIGURATION
VERIFICATION
SECURITY ACCESS
MIGRATING A VM TO ANOTHER NODE
MANUAL INSTALLATION OF FreeNAS
CREATING THE NFS SHARE
SCHEDULED BACKUPS
BACKUP FROM THE COMMAND LINE
RESTORING A BACKUP FROM THE COMMAND LINE
CONCLUSIONS
BIBLIOGRAPHY
INTRODUCTION
We are a medical center that grows every day: we offer several specialties, we have 100 employees and 50 physicians, and we are currently expanding the staff. We also serve a large number of patients. Lately, however, our patients have been complaining because our system has gone down and we have not been able to work normally. For this reason we have decided to set up a high-availability cluster, so that we can work with more peace of mind, keep offering our patients a better service, and continue to grow.
In this project we install and configure a high-availability cluster with PROXMOX. We will use three servers on which Zentyal is installed, where we can configure Active Directory, DNS, or DHCP. The three servers will be in operation; if one of them goes down, high availability ensures that nothing bad happens and the system keeps running on the other two servers. Our network will include a "management" PC, the only machine with access to administer Proxmox. We will also have two NAS servers whose data is synchronized across both disks. Finally, we will back up the Proxmox machines; in theory this copy should be kept in a different location, but that is not possible in our case.
TECHNOLOGY
Proxmox Virtual Environment: an open-source server virtualization environment. It is a Debian-based GNU/Linux distribution with a modified version of the RHEL kernel, and it allows the deployment and management of virtual machines and containers.
The main features of Proxmox VE are:
● Open source
● Live migration
● Bridged networking
● OS templates
● Scheduled backups
● Command-line tools
Bonding: a method of combining two or more network interfaces into a single logical interface. This increases network throughput, bandwidth, and redundancy. If one interface is disconnected or goes down, the remaining interfaces keep network traffic flowing. Bonding is used in situations that require redundancy, fault tolerance, or network load balancing.
High-availability cluster: a set of two or more machines characterized by maintaining a series of shared services and constantly monitoring one another. If a hardware failure occurs on any of the cluster machines, the high-availability software is able to automatically bring the services up on any of the other nodes that form the cluster.
KVM: a complete virtualization solution that allows us to run multiple virtual machines (Windows, Linux, Unix, 32- or 64-bit), in which each virtual machine has its own virtual hardware.
OpenVZ: container-based virtualization for Linux. It allows Proxmox to run multiple isolated operating-system "instances" on a single physical server, with the advantage that each container shares the hardware resources of the host server.
DRBD (Distributed Replicated Block Device): allows real-time remote mirroring (equivalent to RAID-1 over the network), something very difficult to achieve with other systems.
NFS: the Network File System (NFS) is a client/server application that allows a computer user to view, and optionally store and update, files on a remote computer as if they were on the user's own computer.
HeartBeat: an application that provides high availability to two servers sharing an IP address. One server is the active node and the other the passive one.
WHAT IS VIRTUALIZATION AND HOW IT BENEFITS US
Virtualization is the ability to run several logical machines on a single physical one, using specialized software or, in more professional environments, platforms like the one this project deals with.
Each virtual machine is isolated from the rest, so a farm of independent servers can be created on a single physical machine, with the reduction in economic and energy costs that this implies. This technology is based on the hypervisor.
The hypervisor sits between the hardware of the real server and the virtual machines. It assigns the amount of access that each virtual machine has to the processor, memory, hard disk, and other resources.
The benefits are:
Higher utilization rates
Before virtualization, server and storage utilization rates in corporate data centers were below 50%. With virtualization, workloads can be encapsulated and transferred to idle or underused systems.
Resource consolidation
Virtualization enables the consolidation of multiple resources. Beyond consolidating storage, it offers an opportunity to consolidate system architecture, application infrastructure, data and databases, interfaces, networks, desktops, and even business processes, resulting in cost savings and greater efficiency.
Less energy use
The electricity required to run business-class data centers is no longer available in unlimited supply, and its cost is spiraling upward. With virtualization you can reduce total energy consumption and save money in a meaningful way.
Space savings
The physical footprint of servers is a serious problem in most companies; with virtualization the number of physical machines can be reduced.
Disaster recovery / business continuity
Virtualization increases the availability of the services installed on it, and all virtualization tools offer new solutions for disaster recovery.
Ease of maintenance and supervision of critical systems
All hypervisor-based virtualization tools incorporate different utilities for their correct configuration and maintenance, such as monitoring, e-mail notifications, and warnings about extreme usage of some hardware component. They let us obtain a summary of our infrastructure quickly and easily, since these tools allow the monitoring and configuration of several nodes.
WHAT IS KVM?
It is a complete virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). With KVM we can run multiple virtual machines without modifying the Linux or Windows images they run. KVM is part of Red Hat Emerging Technologies.
VIRTUALIZATION SOLUTIONS BASED ON HYPERVISORS

                     VMWARE ESXi          HYPER-V                XENSERVER           PROXMOX VE
OS SUPPORTED         WINDOWS / LINUX      WINDOWS / LINUX        WINDOWS / LINUX     WINDOWS / LINUX
                                          (LIMITED SUPPORT)
LICENSE              PAYMENT              PAYMENT (INCL.         PAYMENT             FREE
                                          WSERVER)
CONTAINERS           NO                   NO                     NO                  YES - OPENVZ
MAXIMUM HARDWARE     128 CPU / 4 TB       240 CPU / 12 TB        32 CPU / 1.5 TB     160 CPU / 2 TB
SUPPORTED            RAM / 62 TB          RAM / 128 NET          RAM / 2 TB HDD /    RAM / 4000 NET
                     VMDK FILE            ADAPTERS / 64 TB       7 NET CARDS         CARDS
                                          VIRTUAL DISK
SAN STORAGE          YES                  YES                    YES                 YES
LIVE VM MIGRATION    YES                  YES (LIMITED)          YES                 YES
SNAPSHOTS            YES                  YES                    YES                 YES
We have chosen Proxmox because it is open-source software. After reading several articles and forums we opted for it, and with it the company will have good software for high availability.
OBJECTIVES
Our main goal is to create a secure, high-availability, fully open-source virtualized system. To do this we use three servers with PROXMOX with dual network cards and two NAS servers that are synchronized with each other. With this system we will achieve high availability in our company, so that we can work properly and avoid situations where a server goes down and we spend hours unable to work.
STRUCTURE AND NETWORK DIAGRAM

IP                MACHINE            PURPOSE
192.168.124.200   PROXMOX 01         PROXMOX SERVER BELONGING TO THE CLUSTER
192.168.124.205   PROXMOX 02         PROXMOX SERVER BELONGING TO THE CLUSTER
192.168.124.215   PROXMOX 03         PROXMOX SERVER BELONGING TO THE CLUSTER
192.168.124.230   DRBD 01            SERVER HOSTING THE VIRTUAL MACHINES AND BACKUPS OF OUR SYSTEM
192.168.124.231   DRBD 02            SERVER HOSTING THE VIRTUAL MACHINES AND BACKUPS OF OUR SYSTEM
192.168.124.216   Ubuntu Server      PROXY SERVER AND WEB SERVER
192.168.124.212   SRV BACKUPS - NFS  SERVER STORING THE BACKUPS OF OUR SYSTEM
MANUAL INSTALLATION OF PROXMOX
Once we have downloaded the ISO and booted from it, we start the installation and find a menu with different options:
● Install Proxmox
● Install Proxmox in debug mode
● Recover the boot sector of the disk
● Check the integrity of the computer's memory
In our case we choose the first option: Install Proxmox VE.
The wizard that will accompany us throughout the installation process starts.
First it shows us the license agreement, which we must read and accept before proceeding.
Next we have to choose the hard drive where Proxmox VE will be installed.
Here we configure the time zone.
After the time zone, we set the password that will later let us log in as the root user.
With this we have finished the installation setup. From here on, the wizard configures the disk and installs the system.
Once installed, we restart the system and start using it.
When the system starts, the first thing we see is a blue screen with different options.
Here we have a text-mode screen that asks for the username and password; we can also log in from a web interface.
Click to access.
We log in to the web interface with the username and password.
Now we have the Proxmox VE management interface.
UPDATING /etc/hosts
We edit the /etc/hosts file on the three servers and add the entries for the nodes.
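As a sketch, the entries added to /etc/hosts might look like this (the hostnames are illustrative; the IPs come from our address table):

```
192.168.124.200  proxmox01
192.168.124.205  proxmox02
192.168.124.215  proxmox03
```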
UPDATING SYSTEM AND TEMPLATES
Although we downloaded the ISO recently, it is always a good idea to update the system with the
command:
# apt-get update && apt-get dist-upgrade
Templates are operating-system images already prepared for startup. On first boot they ask us for a series of parameters depending on their purpose. An example of this is the web site
https://www.turnkeylinux.org. In order to use this handy resource we must update the template list that Proxmox includes by default.
The list can be seen with the command
# pveam available
If we execute it before updating, only the system templates appear.
Now we update with the command
# pveam update
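Once the list is updated, a template can be downloaded into local storage; as a sketch (the section and template names are illustrative and change between releases):

```
# pveam available --section turnkeylinux
# pveam download local debian-9-turnkey-lamp_15.0-1_amd64.tar.gz
```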
CREATING THE CLUSTER
To create the cluster we have to choose a node that will be the main one; in our case Node 1 is the main node. We create the cluster with the pvecm command, giving it the name we want, in our case "clusterasix".
With the cluster created, the next step is to connect to the other two nodes and join them with the command pvecm add followed by the IP of Node 1, where the cluster was created.
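As a sketch, the commands for these steps are (the cluster name and IP are the ones used in this project):

```
# pvecm create clusterasix      # on Node 1
# pvecm add 192.168.124.200     # on Nodes 2 and 3
# pvecm status                  # check the cluster membership
```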
When we have added both nodes we can check the status:
If we enter the Web interface we can see the three nodes.
BONDING CONFIGURATION
To configure the bonding we must first edit the bridge interface, named "vmbr0". We change the interface it uses by default to the bond we are going to create, named bond0. The rest of the parameters are left unchanged.
Now click on Linux Bond.
We have to add all the interfaces that will form the bond and choose the bond mode, which will be "balance-rr".
Final configuration of the bonding:
Here we can see that it has been created correctly. The bonding configuration has to be applied on all three servers.
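Although we configured the bonding from the web interface, the resulting /etc/network/interfaces on each node looks roughly like this (interface names, address, and gateway are illustrative; each node uses its own IP):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode balance-rr
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.124.200
    netmask 255.255.255.0
    gateway 192.168.124.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```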
INSTALLING CEPH
To allow the movement of virtual machines between nodes, it is necessary to install and configure CEPH.
It is a package that we install on all the nodes; on one of them (the main one) we indicate the network in which it works. That is all CEPH needs to work.
We install it on all the nodes of our network:
# pveceph install --version luminous
Activate the Ceph-Monitor service
#pveceph init --network x.x.x.x./X
#pveceph createmon
We start the service on all the nodes.
From the web interface we create the monitors on all these Ceph nodes:
Node > Ceph > Monitor > Create
PROBLEMS WITH KVM
We detected a problem when booting our test VM. When starting, it gave us the following error:
We investigated and found that KVM hardware virtualization cannot be active in the VMs and on the real machine at the same time. We adjusted the KVM setting of the VM to ensure a clean installation.
INSTALLING DRBD
DRBD allows real-time remote mirroring (equivalent to RAID-1 over the network), something very difficult to achieve with other systems. DRBD creates a block device, drbd0, accessible from both servers. The primary server is the one that has access to the drbd0 device: every time something is written to drbd0, it is written to the physical partition and the same data is sent over TCP/IP to the secondary server, keeping both physical partitions synchronized, exactly like a RAID-1.
With this software the information is synchronized; if one server goes down, the other takes over, so we do not lose the information.
First of all we installed two Debian servers and added one hard disk of the same size to each server.
Once we have the two servers, we partition the disks with fdisk.
Now we proceed to install DRBD on the two servers.
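As a sketch, assuming the added disk appears as /dev/sdb on each server:

```
# fdisk /dev/sdb                  # create a single partition, /dev/sdb1
# apt-get install drbd-utils      # on both servers
```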
In the DRBD configuration file we add the device, the disk on which we create the storage, and the IP addresses of the two servers.
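A minimal sketch of the resource definition (the hostnames and the /dev/sdb1 disk are assumptions; the IPs are those of our two DRBD servers, and r0 is the resource name used later by Heartbeat):

```
# /etc/drbd.d/r0.res (identical on both servers)
resource r0 {
    protocol C;
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on drbd01 {
        address 192.168.124.230:7788;
    }
    on drbd02 {
        address 192.168.124.231:7788;
    }
}
```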
We initialize the metadata storage on both servers.
We activate DRBD on both servers.
We configure server 1 as the primary by executing the following command on it.
We can see in the image that it appears as primary, but the second disk is missing; at first we had quite a few problems synchronizing them.
On server 2 it appears as secondary.
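As a sketch, the commands behind these steps are:

```
# drbdadm create-md r0                             # initialize metadata, both servers
# drbdadm up r0                                    # activate DRBD, both servers
# drbdadm -- --overwrite-data-of-peer primary r0   # on server 1 only
# cat /proc/drbd                                   # watch the synchronization state
```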
The primary disk will store all the information and the secondary will be waiting for the primary to go down.
NFS CONFIGURATION
We install the NFS server on both servers:
We remove the NFS startup scripts (Heartbeat will manage the service):
We create an ext3 file system on the DRBD device and mount it on the /data directory. This has to be done only on server 1.
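A sketch of these steps from the command line:

```
# apt-get install nfs-kernel-server        # on both servers
# update-rc.d -f nfs-kernel-server remove  # Heartbeat will start NFS itself
# mkfs.ext3 /dev/drbd0                     # on server 1 only
# mkdir /data
# mount /dev/drbd0 /data
```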
We verify that it has been mounted
HEARTBEAT CONFIGURATION
We will use HeartBeat because we had problems: when the primary server went down, the secondary did not take over. With this software we have created a virtual IP, so that when the primary falls, the traffic jumps to the secondary.
Let's look at the HeartBeat configuration. We install HeartBeat on both servers. HeartBeat controls the whole issue: it launches and stops NFS on both servers, monitors and activates the secondary server if the primary fails, and ensures that the NFS server responds on the virtual IP.
We install HeartBeat.
/etc/heartbeat/ha.cf: we create this identical file on both servers.
/etc/heartbeat/haresources: we create this identical file on both servers.
This file specifies the name of the primary server, the virtual IP, the DRBD resource defined in /etc/drbd.conf (r0), the DRBD device (/dev/drbd0, /data, ext3), and the service to monitor (nfs-kernel-server).
/etc/heartbeat/authkeys: we create this identical file on both servers.
Here we define the authentication mechanism (MD5) and the password with which the two heartbeat daemons of the servers authenticate against each other. Only root must have read permissions on /etc/heartbeat/authkeys, so we set its permissions accordingly:
We start the service and check it.
Once everything is installed, we go to PROXMOX and, under Storage, add the NFS storage.
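A minimal sketch of the three files (node names, interface, virtual IP, and password are illustrative; r0, /dev/drbd0, /data, ext3, and nfs-kernel-server are the values described above):

```
# /etc/heartbeat/ha.cf (identical on both servers)
logfacility local0
keepalive 2
deadtime 10
bcast eth0
auto_failback on
node drbd01 drbd02

# /etc/heartbeat/haresources (identical on both servers)
drbd01 IPaddr::192.168.124.240/24 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server

# /etc/heartbeat/authkeys (identical on both servers, readable only by root)
auth 3
3 md5 secretpassword
```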
We enter the name, in the server field we put the virtual IP, and the export will be /data.
We can see the status of the storage.
Now we go to the primary server, list the contents of /data, and see that the folders appear.
Now we proceed to install Zentyal.
VERIFICATION
We start by testing that, if a server hosting a virtual machine goes down, the machine moves to another server of the cluster.
As you can see in the screenshot, the virtual machine is on server proxmox03.
Now we proceed to turn off server proxmox03, and the virtual machine moves to PROXMOX01.
Now the virtual machine is on server proxmox02, working properly. We timed it: it takes about 2 minutes to move to the other server.
Now we proceed to check the NAS servers. If server Nas01 goes down, server Nas02 must take over.
We run cat /proc/drbd and check that one of the servers is primary.
We see that the /data partition is mounted.
The second server appears as secondary.
We turn off the main server and the secondary becomes primary without affecting the storage.
We see that the /data partition has been mounted on that server.
Now we turn on the server we had turned off, and it becomes the primary again, as at the beginning.
We had problems because the services did not start at boot, so we created a startup script to start them automatically.
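A minimal sketch of such a startup script, added for example to /etc/rc.local (the ordering and delay are assumptions based on our setup):

```
#!/bin/sh
# Start the replication stack in order at boot
/etc/init.d/drbd start
sleep 5                      # give DRBD time to connect to its peer
/etc/init.d/heartbeat start
exit 0
```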
Now we try to log in from an IP that is not .77 or .78, and this happens:
MIGRATING A VM TO ANOTHER NODE
With Proxmox we can migrate a machine from one server to another. With this option we can migrate a machine live, without having to turn it off, whenever we want to perform a maintenance task on the node on which the instance is running.
1. Node > VM/CT > right click > Migrate
* Another option is to click on Migrate in the control panel of the VM.
2. The migration panel is so simple that it only has one parameter: the destination node. It informs us of its status just below the option.
3. It informs us about the status of the process.
4. Here we see the migrated VM running without problems.
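The same migration can also be launched from the command line of the source node (the VMID 100 is an example):

```
# qm migrate 100 proxmox01 --online
```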
MANUAL INSTALLATION OF FreeNAS
1. We start by downloading the FreeNAS ISO, booting from it, and starting the installation.
2. It warns us that ada0 will be completely erased and cannot be used to share data. Click Yes.
3. Enter the root password.
4. Now we choose to boot via BIOS.
5. FreeNAS starts installing.
6. It shows that it has been installed correctly.
7. Once installed, we restart in order to start the program and use it.
8. A configuration menu will appear.
9. We access the web interface through its IP.
10. We check that we can get in.
CREATING THE NFS SHARE
We will create storage for backups.
1. For the creation of our NFS server we use the FreeNAS installed in our network.
In the Services section we activate the NFS module.
2. In the Sharing > Unix NFS Shares section, click on Add Unix NFS Share.
3. Fill in the fields. The path will be the same as the one indicated in Storage.
4. We verify that it has been created correctly.
5. Now in Proxmox we add this new NFS share.
We select our Data Center and enter the Storage section. We open the Add menu and select the NFS option, since that is the type of our storage.
On this screen we fill in all the options. The fields to fill in are:
ID: the name we give this storage
SERVER: IP address of the NFS server
EXPORT: directory path on the server
CONTENT: what this storage will contain (backups, images, containers, ...)
NODES: which nodes will have access to this NFS server
ENABLE: to allow its use
MAX BACKUPS: how many backups will be kept per VM/container. We recommend 5.
SCHEDULED BACKUPS
2. On this screen we configure the whole backup. In our case we decided to create a backup of the Zentyal VM every day at 1:30. Its destination will be "backups" (an NFS share mounted on FreeNAS).
An email will be sent each time the task is launched, whether the backup succeeds or not.
The mode will be snapshot, so there will be no errors and nothing will happen even if the VM is up and running.
3. When we launch the backup task we get this screen.
4. This is the email we receive when the backup is done. Here we see two emails: one successful backup and one failed.
BACKUP FROM THE COMMAND LINE
We can make backups from the command line with the vzdump command. With the command qm list we can see the VMs currently on this node.
Command options:
--storage: specifies the storage where the backup will be saved
--mode (stop | suspend | snapshot):
stop
The machine is stopped during the backup process.
suspend
With OpenVZ containers, rsync is used to copy the machine to a temporary directory; the machine is then suspended, a second rsync copies the remaining files, and the machine resumes operation.
With QEMU/KVM machines the operation is similar to stop, but suspending/resuming the machine instead of stopping and starting it.
snapshot
Uses LVM to carry out the backup. There is no need to stop the machine, but additional space is needed for the creation of the LVM snapshot.
--compress (0 | 1 | gzip | lzo): compresses the backup; by default it is lzo, and 1 is the same as lzo
--dumpdir directory: sets the directory where the created backup will be saved
--maxfiles 1-N: sets the maximum number of backups that can be stored for the same machine; by default it is 1. So if, for example, we had set --maxfiles 2 and there were already two backups in the directory, it would delete the oldest, leaving two backups.
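Putting these options together, a backup of a machine (VMID 101 is an example) to our NFS storage could be launched like this:

```
# vzdump 101 --mode snapshot --compress lzo --storage backups --maxfiles 2
```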
If we go to the web browser we see that the task was launched
If we go to the NAS server we see the backup copies in /mnt/vms.
RESTORING A BACKUP FROM THE COMMAND LINE
Backups can be restored from the command line or from the graphical interface.
From the command line we have two applications: vzrestore for OpenVZ containers and qmrestore for KVM machines.
vzrestore [--storage storage_name] file vmid
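For example, restoring an OpenVZ backup from the NFS mount as container 101 (the file name and VMID are illustrative):

```
# vzrestore /mnt/vms/vzdump-openvz-101.tar.lzo 101
```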
CONCLUSIONS
This project has been useful for us to learn about high availability in virtualization.
We can say that the installation of Proxmox and the configuration of the high-availability cluster were easy to perform. From there things changed: when synchronizing the two NAS servers we had quite a few problems, because at first we could not get them to synchronize so that, when the main one falls, the secondary becomes the main one without losing information. We spent a lot of time researching and in the end we managed to create a virtual IP. We did it with HeartBeat, which monitors and activates the secondary server if the first one fails, so that the DRBD server responds on the virtual IP.
The biggest problem we have had is that the VM does not get Internet access, since it does not reach the gateway.
We also detected problems with the VMs when using local storage: if the VM moves, its file is lost and we have to recreate it.
Server virtualization is simple, but certain safety measures must be taken: for example, placing backups externally and monitoring the network and its IP configuration. The Proxmox system is simple at the VM level, but complicated at the network level in high availability.
We found this project interesting. We got on quite well with the Proxmox configuration, not so much with DRBD and its configuration. The DRBD and HeartBeat services have problems at startup; we resolved this as explained in this document.