PXE Server Configuration and Documentation
This server will be used to PXE boot workstations and assign them IP addresses (utilizing Wake-
On-Lan capable NICs on workstations), point to ESXi installation media and scripts, and to
automatically install and run ESXi 5.1 (via a specially configured kickstart file). These ESXi
hosts will then in turn host Virtual Machines for the lab. These VMs are administered and
configured using VMware vSphere and vCenter server appliance, and NETLAB+ is used as a
front-end for remote access for students and faculty to use or configure the VM pods (“pod” is a
pre-configured group of VMs configured for NETLAB) and environments.
The server must provide:
- TFTP server (serves the ESXi installation files/media and the other configuration files the target hosts need to start the boot/installation)
- DHCP server (provides target ESXi hosts with IP addresses and points them to the PXE boot file)
- Web server (hosts the custom kickstart script that automates the ESXi install, and possibly other files if using gpxelinux.0)
- Access to the ESXi 5.1 files from an .iso or other source, so that the files can be copied
- SYSLINUX (using either pxelinux.0 or gpxelinux.0 configuration files to boot)
- Kickstart configuration script (automates the install, among other settings)
If you use PXELINUX with the pxelinux.0 file, the pxelinux.0 binary itself, the
configuration file boot.cfg, the kernel, and other files are transferred by TFTP. When using
gPXE, only the gpxelinux.0 binary and its configuration file are transferred by TFTP; a
web server transfers the kernel and the other files required for the install. Because
of this, gPXE can provide better performance/security in some cases, although both worked
effectively in testing and pxelinux.0 is simpler to implement.
** This has only been tested in a Linux environment using SYSLINUX to provide the PXE
boot functions. There are Microsoft Windows solutions available, although those solutions are
more complex and bloated, relying heavily on third-party applications. **
Test environment consisted of:
SuSE 11 Server SP3 VM with the following services/packages/configurations:
20GB HD
2GB RAM
Apache2, TFTP, DHCP, HTTP, SYSLINUX (using pxelinux.0)
TFTP Setup and Configuration
In SuSE Linux, by default, this is located off the root at /tftpboot.
This TFTP directory structure will contain all the files needed for PXE booting (except for the
DHCP configuration and the kickstart script).
This includes:
- The contents of the ESXi .iso (installation files)
- The SYSLINUX files
- The bootloader and its configuration file (boot.cfg)
# Depending on the system, you may have to configure the TFTP service to point to this directory.
The pxelinux.0 file must be copied into the TFTP directory so that it is located at:
/tftpboot/pxelinux.0
pxelinux.0 is the SYSLINUX binary used to boot the hosts. The following directory needs to be created:
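As a sketch of such a pointer: on SuSE, the TFTP daemon is typically launched via xinetd, and its served directory is set with `server_args`. The service path and options below are common defaults, not taken from the test environment; adjust them to your distribution:

```
# /etc/xinetd.d/tftp (illustrative; paths/options vary by distribution)
service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
        disable         = no
}
```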
/tftpboot/pxelinux.cfg
This directory contains the configuration file that pxelinux.0 uses to boot. This file must be
created and named default, so it is located at:
/tftpboot/pxelinux.cfg/default
This PXE configuration file must point to two vital files: the bootloader configuration file
boot.cfg, which specifies the kernel, kernel options, and boot parameters, and the bootloader
itself, which in our case is mboot.c32.
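A minimal sketch of what /tftpboot/pxelinux.cfg/default can contain for this setup (it assumes mboot.c32 and boot.cfg sit in the TFTP root as described above):

```
DEFAULT mboot.c32
APPEND -c boot.cfg
```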
boot.cfg contains additional bootloader settings, one of which is vital: the location of the
kickstart script.
The generic template for this file is isolinux.cfg, which is located on the ESXi installation media.
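A sketch of the relevant lines of boot.cfg: copy the file from the ESXi installation media into the TFTP root, strip any leading slashes from the file paths, and add the kickstart location to kernelopt. The web server address 169.254.0.10 is an assumed value for this lab, and the modules line is abbreviated here; use the full line from the media's own boot.cfg:

```
bootstate=0
title=Loading ESXi installer
kernel=tboot.b00
kernelopt=ks=http://169.254.0.10/kickstart.ks.cfg
modules=b.b00 --- useropts.gz --- k.b00 --- chardevs.b00 --- a.b00
```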
DHCP Setup and Configuration
The DHCP configuration file is called dhcpd.conf and is located at:
/etc/sysconfig/dhcpd.conf
Specifically, it assigns an IP address and points to the PXELINUX file used (pxelinux.0 or
gpxelinux.0), which is in the TFTP directories. The following is the contents of the
dhcpd.conf file used in the test environment.
option subnet-mask 255.255.255.0;
option broadcast-address 169.254.0.255;
option domain-name "pxeserver.local";
option domain-name-servers 169.254.1.1;
option routers 169.254.0.1;
max-lease-time 604800;
authoritative;
filename "pxelinux.0";
ddns-update-style none;
default-lease-time 86400;
allow booting;
allow bootp;
# the following defines the IP pool that DHCP will provide for the hosts
subnet 169.254.0.0 netmask 255.255.255.0 {
range 169.254.0.100 169.254.0.150;
}
class pxeclients {
match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
}
# The following entry ensures the same IP is given to a host every time it boots. This entry
# needs to be repeated in dhcpd.conf (this file) for each host machine. (MAC addresses are needed)
host ESXi_1 {
hardware ethernet BA:DF:00:DF:C0:FF;
fixed-address 169.254.0.101;
option host-name ESXi_1;
}
Hosting the Kickstart File Using HTTP/Apache2
The kickstart file mentioned above must be hosted using HTTP on the default web server. It
needs to be located in the web server root. For SuSE Linux this location is:
/srv/www/htdocs/
/srv/www/htdocs/kickstart.ks.cfg
For this environment the kickstart file is named kickstart.ks.cfg, although it can be named anything.
# Remember, the kickstart file dictates how the machine behaves during/after installing
ESXi, covering items such as rebooting, license agreements, and diskless hosts. The syntax must be
added/adjusted to conform to specific network needs.
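A minimal kickstart sketch of the kind described here; the root password is a placeholder, and the directives shown (vmaccepteula, install with --firstdisk/--overwritevmfs, network, reboot) are standard ESXi 5.x kickstart syntax:

```
vmaccepteula
rootpw ChangeMe123
install --firstdisk --overwritevmfs
network --bootproto=dhcp
reboot
```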
Test
After completing the above steps and making sure all files are in the correct directories and
contain the correct data, the system can be tested. Begin by restarting the DHCP, TFTP, and
HTTP services.
Launch a host (a blank VM in VMware Workstation was used in the test environment) that is set to
boot from the NIC and is on the correct network. If all is correct, the ESXi installation will begin.
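As a pre-flight check before restarting the services, a short script can confirm that the TFTP root contains the files described above. This is a minimal sketch: the path list mirrors this document's layout, and tftp_root is parameterized so it can be pointed elsewhere:

```python
import os

# Files that must exist under the TFTP root per the layout described above.
REQUIRED = [
    "pxelinux.0",
    "pxelinux.cfg/default",
    "boot.cfg",
    "mboot.c32",
]

def missing_files(tftp_root="/tftpboot"):
    """Return the list of required files not present under tftp_root."""
    return [f for f in REQUIRED
            if not os.path.isfile(os.path.join(tftp_root, f))]

if __name__ == "__main__":
    missing = missing_files()
    if missing:
        print("Missing from TFTP root:", ", ".join(missing))
    else:
        print("TFTP root looks complete.")
```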