This document describes setting up a PXE server to remotely install and configure ESXi hosts. The PXE server will:
1. Use TFTP to serve ESXi installation files and PXE boot files to hosts.
2. Use DHCP to assign IP addresses to hosts and point them to the PXE boot file.
3. Use a web server to host a kickstart file that automates the ESXi installation.
The document provides details on configuring the TFTP, DHCP and web servers on a Linux system to support this PXE installation process. It also describes testing the setup by booting a host to trigger the remote ESXi installation.
1. PXE Server Configuration and Documentation
This server will be used to PXE boot workstations and assign them IP addresses (utilizing Wake-On-LAN capable NICs on the workstations), point them to the ESXi installation media and scripts, and automatically install and run ESXi 5.1 (via a specially configured kickstart file). These ESXi hosts will in turn host virtual machines for the lab. The VMs are administered and configured using VMware vSphere and the vCenter Server Appliance, and NETLAB+ is used as a front-end giving students and faculty remote access to use or configure the VM pods and environments (a "pod" is a pre-configured group of VMs set up for NETLAB).
The server must provide:
- TFTP Server (to serve the ESXi installation files/media and the other necessary configuration files to the target hosts to start the boot/installation)
- DHCP Server (to provide target ESXi hosts with IP addresses and point them to the PXE boot file)
- Web Server (to host the custom kickstart script which automates the ESXi install, and possibly other files if using gpxelinux.0)
- Access to the ESXi 5.1 files from an .iso or other source, so the files can be copied
- SYSLINUX (using either pxelinux.0 or gpxelinux.0 configuration files to boot)
- Kickstart configuration script (automates the install, among other settings)
If you use PXELINUX with the pxelinux.0 file, the pxelinux.0 binary itself, the configuration file, boot.cfg, the kernel, and the other files are all transferred by TFTP. When using gPXE, only the gpxelinux.0 binary and the configuration file are transferred by TFTP; a web server can then transfer the kernel and the other files required for the install. Because of this, gPXE can provide better performance/security in some cases, although both worked effectively in testing and pxelinux.0 is simpler to implement.
** This has only been tested in a Linux environment using SYSLINUX to provide the PXE boot functions. Microsoft Windows solutions are available, although they are more complex and rely heavily on third-party applications. **
Test environment consisted of:
SuSE Linux Enterprise Server 11 SP3 VM with the following services/packages/configurations:
20GB HD
2GB RAM
Apache2 (HTTP), TFTP, DHCP, SYSLINUX (using pxelinux.0)
2. TFTP Setup and Configuration
In SuSE Linux, by default, this is located off the root at /tftpboot.
This TFTP directory structure will contain all the files needed for PXE booting (except for the DHCP configuration and the kickstart script).
This includes:
- The contents of the ESXi .iso (installation files)
- The SYSLINUX files
- The bootloader and its configuration file (boot.cfg)
# Depending on the system, you may have to create a pointer to the tftp directory; here is an example;
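(A sketch, assuming a TFTP daemon that looks in /srv/tftpboot by default; adjust the link target to your daemon's actual default directory.)
ln -s /tftpboot /srv/tftpboot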
The pxelinux.0 file needs to be copied into the tftp directory so it is located at;
/tftpboot/pxelinux.0
pxelinux.0 is used by SYSLINUX to boot the hosts. The following directory needs to be created;
/tftpboot/pxelinux.cfg
This directory contains the file that pxelinux.0 uses to boot. This file needs to be created and
named default, and is located at;
/tftpboot/pxelinux.cfg/default
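Putting this section together, the copy steps might look like the following sketch (the .iso filename is an assumption; /usr/share/syslinux is where the SuSE syslinux package installs pxelinux.0);
# Copy the ESXi installation media into the TFTP root
mount -o loop VMware-VMvisor-Installer-5.1.0.iso /mnt
cp -r /mnt/* /tftpboot/
umount /mnt
# Copy the PXELINUX bootloader and create its configuration directory
cp /usr/share/syslinux/pxelinux.0 /tftpboot/pxelinux.0
mkdir /tftpboot/pxelinux.cfg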
3. This PXE configuration file must point to two vital files: the bootloader itself, which in our case is mboot.c32, and the bootloader configuration file boot.cfg, which specifies the kernel, kernel options, and boot parameters.
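A minimal /tftpboot/pxelinux.cfg/default along these lines can be used (a sketch, not the exact test-lab file);
DEFAULT install
LABEL install
KERNEL mboot.c32
APPEND -c boot.cfg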
boot.cfg contains additional bootloader settings, one of which is vital: the location of the kickstart script – see the sketch below.
The generic template for this file is isolinux.cfg, which is located in the ESXi installation media.
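For illustration, a boot.cfg might look like the sketch below. Only the kernelopt line is edited to point at the kickstart script (the IP and filename are placeholders for the web server set up in section 5); the kernel and modules lines should be copied unchanged from the boot.cfg on the installation media, with leading slashes removed from the paths for PXE booting. The modules line here is truncated.
bootstate=0
title=Loading ESXi installer
kernel=tboot.b00
kernelopt=ks=http://169.254.0.1/ks.cfg
modules=b.b00 --- useropts.gz --- k.b00 --- ...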
4. DHCP Setup and Configuration
The DHCP configuration file is called dhcpd.conf and is located at;
/etc/sysconfig/dhcpd.conf
Specifically, it assigns an IP address and points to the PXELINUX file used (pxelinux.0 or gpxelinux.0), which is in the TFTP directories. The data below is the contents of the dhcpd.conf file used in the test environment.
option subnet-mask 255.255.255.0;
option broadcast-address 169.254.0.255;
option domain-name "pxeserver.local";
option domain-name-servers 169.254.1.1;
option routers 169.254.0.1;
max-lease-time 604800;
authoritative;
filename "pxelinux.0";
ddns-update-style none;
default-lease-time 86400;
allow booting;
allow bootp;
# the following defines the IP pool that DHCP will provide for the hosts
subnet 169.254.0.0 netmask 255.255.255.0 {
range 169.254.0.100 169.254.0.150;
}
class pxeclients {
match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
}
# The following entry will ensure the same IP is given to a host every time it boots. This entry needs to be repeated in dhcpd.conf (this file) for each host machine. (MAC addresses are needed)
host ESXi_1 {
hardware ethernet BA:DF:00:DF:C0:FF;
fixed-address 169.254.0.101;
option host-name "ESXi_1";
}
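Note that because DHCP and TFTP run on the same machine here, dhcpd defaults to its own address as the boot server. If TFTP were moved to a separate machine, a next-server directive would be needed alongside the filename directive (the address below is hypothetical);
next-server 169.254.0.1;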
5. Hosting the Kickstart File using HTTP/Apache2
The kickstart file mentioned above must be hosted using HTTP on the default web server. It
needs to be located on the web server root. For SuSE Linux this location is;
/srv/www/htdocs/
/srv/www/htdocs/kickstart.ks.cfg
For this environment the kickstart file is named ks.cfg, although it can be named anything.
# Remember, the kickstart file dictates how the machine will behave during/after installing
ESXi, such as rebooting, license agreements, and diskless hosts. The appropriate syntax must be
added/adjusted to conform to specific network needs.
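As a starting point, a minimal ESXi 5.x kickstart might look like the sketch below (the root password is a placeholder, and --overwritevmfs will destroy any existing VMFS datastore on the first disk);
# Accept the EULA and set the root password
vmaccepteula
rootpw Password123!
# Install to the first detected disk, overwriting any existing VMFS
install --firstdisk --overwritevmfs
# Use DHCP for networking and reboot when the install finishes
network --bootproto=dhcp
reboot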
6. Test.
After completing the above steps and making sure all files are in the correct directories and contain the correct data, the system can be tested. Begin by restarting the DHCP, TFTP, and HTTP services.
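On the SuSE test system the init scripts can be used for this (assuming TFTP runs from xinetd, as is the SuSE default);
rcdhcpd restart
rcxinetd restart
rcapache2 restart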
Launch a host (a blank VM in VMware Workstation was used in the test environment) that is set to boot from the NIC and is on the correct network. If everything is correct, the ESXi installation will begin.