This document provides an introduction to using KVM for virtualization on IBM z Systems mainframes. It discusses the advantages of KVM on z Systems, the hardware and software components involved, and how to plan, install, configure, manage and monitor a KVM environment on an IBM mainframe. It also describes how to build a cloud infrastructure using KVM and IBM Cloud Manager with OpenStack.
Getting Started with KVM for IBM z Systems
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
DB2®
DS8000®
ECKD™
FICON®
FlashSystem™
Global Business Services®
IBM®
IBM FlashSystem®
IBM z™
IBM z Systems™
IBM z13™
PR/SM™
Processor Resource/Systems Manager™
Redbooks®
Redbooks (logo) ®
Storwize®
System z®
XIV®
z Systems™
z/OS®
z/VM®
z13™
The following terms are trademarks of other companies:
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
IBM Redbooks promotions
Find and read thousands of IBM Redbooks publications. Search, bookmark, save, and organize favorites; get up-to-the-minute Redbooks news and announcements; and link to the latest Redbooks blogs and videos.
Download the latest version of the Redbooks Mobile App for iOS and Android.
Promote your business in an IBM Redbooks publication. Place a Sponsorship Promotion featuring your business or solution, with a link to your web site. Qualified IBM Business Partners may place a full-page promotion in the most popular Redbooks publications. Imagine the power of being seen by users who download millions of Redbooks publications each year!
ibm.com/Redbooks
About Redbooks Business Partner Programs
Dave Bennin, Don Brennan, Rich Conway, and Bob Haimowitz
IBM Global Business Services®, Development Support Team
Zhuo Hua Li and Hong Jin Wei
IBM China
Klaus Smolin, Tony Gargya, and Viktor Mihajlovski
IBM Germany
Now you can become a published author, too
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time. Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us.
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review form:
ibm.com/redbooks
Send your comments by email:
redbooks@us.ibm.com
Mail your comments:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
1.1 Why KVM for IBM z Systems
Today’s systems must be able to scale up and scale out, not only in terms of performance and
size, but also in functions. Virtualization is a core enabler of system capability, but open
source and standards are key to making virtualization effective.
KVM for IBM z Systems is an open source virtualization option for running Linux-centric
workloads, using common Linux-based tools and interfaces, while taking advantage of the
robust scalability, reliability, and security that is inherent to the IBM z Systems platform. The
strengths of the z Systems platform have been developed and refined over several decades
to provide additional value to any type of IT-based services.
KVM for IBM z Systems can manage and administer multiple virtual machines, allowing for
large numbers of Linux-based workloads to run simultaneously on the z Systems platform.
z Systems platforms also have a long history of providing security for applications and
sensitive data in virtual environments. The z Systems platform is the most securable platform
in the industry, with security integrated throughout the stack in hardware, firmware, and software.
1.1.1 Advantages of using KVM for IBM z Systems
KVM for IBM z Systems offers enterprises a cost-effective alternative to other hypervisors. It
has simple and familiar standard user interfaces, offering easy integration of the z Systems
platform into any IT infrastructure.
KVM for IBM z Systems can be managed to allow for over-commitment of system resources
to optimize the virtualized environment. This is described in 2.2.1, “Compute consideration”
on page 14.
In addition, KVM for IBM z Systems can help make platform mobility easier. Its live relocation
capabilities enable you to move virtual machines and workloads between multiple instances
of KVM for IBM z Systems without incurring downtime.
Table 1-1 lists some of the key features and benefits of KVM for IBM z Systems.
Note: Both KVM for IBM z Systems and Linux on z Systems are the same KVM and Linux
that run on other hardware platforms, with the same look and feel.
Chapter 1. KVM for IBM z Systems
Table 1-1 KVM for IBM z Systems key features
1.2 IBM z Systems and KVM
The z Systems platform is highly virtualized, with the goal of maximizing the use of compute
and I/O (storage and network) resources, and simultaneously lowering the total amount of
resources needed for your workloads. For decades, virtualization has been embedded in
z Systems architecture and built into the hardware and firmware.
Virtualization requires a hypervisor, which manages resources that are required for multiple
independent virtual machines. Hypervisors can be implemented in software or hardware, and
z Systems has both. The hardware hypervisor is known as IBM Processor Resource/Systems
Manager™ (PR/SM™). PR/SM is implemented in firmware as part of the base system. It fully
virtualizes the system resources and does not require additional software to run. KVM for
IBM z is a software hypervisor that uses PR/SM functions to service its virtual machines.
PR/SM enables defining and managing subsets of the z Systems resources in logical
partitions (LPARs). Each KVM for IBM z instance runs in a dedicated LPAR. The LPAR
definition includes several logical processing units (LPUs), memory, and I/O resources. LPUs
are defined and managed by PR/SM and are perceived by KVM for IBM z as real CPUs.
PR/SM is responsible for accepting requests for work on LPUs and dispatching that work on
physical CPUs. LPUs can be dynamically added to and removed from an LPAR. LPARs can
be added, modified, activated, or deactivated in z Systems platforms using the Hardware
Management Console (HMC).
KVM hypervisor: Supports running multiple disparate Linux virtual machines on a single system
CPU sharing: Allows for the sharing of CPU resources by virtual machines
I/O sharing: Enables the sharing of I/O resources among virtual machines
Memory and CPU over-commitment: Supports the over-commitment of CPU and memory, and swapping of inactive memory
Live virtual machine relocation: Enables workload migration with minimal impact
Dynamic addition and deletion of virtual I/O devices: Reduces downtime to modify I/O device configurations for virtual machines
Thin-provisioned virtual machines: Allows for copy-on-write virtual disks to save on storage
Hypervisor performance management: Supports policy-based, goal-oriented management and monitoring of virtual CPU resources
Installation and configuration tools: Supplies tools to install and configure KVM for IBM z Systems
Transactional execution use: Provides improved performance for running multi-threaded applications
KVM for IBM z Systems also uses PR/SM to access storage devices and the network for
Linux on z Systems virtual machines (see Figure 1-1).
Figure 1-1 KVM running in z Systems LPARs
1.2.1 Storage connectivity
Storage connectivity is provided on the z Systems platforms by host bus adapters (HBAs)
called Fibre Connection (IBM FICON®) features. IBM FICON (FICON Express16S and
FICON Express8S) features follow Fibre Channel (FC) standards. They support data storage
and access requirements and the latest FC technology in storage devices.
The FICON features support the following protocols:
Native FICON
An enhanced protocol (over FC) that provides for communication with FICON devices,
such as disks, tapes, and printers. Native FICON supports IBM Extended Count Key Data
(ECKD™) devices.
Fibre Channel Protocol (FCP)
A standard protocol for communicating with disk and tape devices. FCP supports small
computer system interface (SCSI) devices.
Linux on z Systems and KVM for IBM z Systems can use both protocols by using the FICON
features.
1.2.2 Network connectivity
Network connectivity is provided on the z Systems platform by the network interface cards
(NICs) called Open Systems Adapter (OSA) features. The OSA features (OSA-Express5S,
OSA-Express4S, and OSA-Express3) provide direct, industry-standard local area network
(LAN) connectivity and communication in a networking infrastructure.
OSA features use the z Systems I/O architecture, called queued direct input/output (QDIO).
QDIO is a highly efficient data transfer mechanism that uses system memory queues and a
signaling protocol to directly exchange data between the OSA microprocessor in the feature
and the network stack running in the operating system.
KVM for IBM z Systems can use the OSA features by virtualizing them for Linux on z Systems
to use.
For more information about storage and network connectivity for Linux on z Systems, see
The Virtualization Cookbook for IBM z Systems Volume 3: SUSE Linux Enterprise Server
12, SG24-8890:
http://www.redbooks.ibm.com/abstracts/sg248890.html
1.2.3 Hardware Management Console
The Hardware Management Console (HMC) is a stand-alone computer that runs a set of
management applications. The HMC is a closed system, which means that no other
applications can be installed on it.
The HMC can set up, manage, monitor, and operate one or more z Systems platforms. It
manages and provides support utilities for the hardware and its LPARs.
The HMC is used to install KVM for IBM z Systems and to provide an interface to the IBM z
Systems hardware for configuration management functions.
For details about the HMC, see Introduction to the Hardware Management Console in the
IBM Knowledge Center:
http://ibm.co/1PD5gFi
1.2.4 Open source virtualization
Kernel-based virtual machine (KVM) technology is a cross-platform virtualization technology
that turns the Linux kernel into an enterprise-class hypervisor by using the hardware
virtualization support built into the z Systems platform. This means that KVM for IBM z
Systems can do things such as scheduling tasks, dispatching CPUs, managing memory, and
interacting with I/O resources (storage and network) through PR/SM.
KVM for IBM z Systems creates virtual machines as Linux processes that run Linux on
z Systems images using a modified version of another open source component, the quick
emulator (QEMU). QEMU provides I/O device emulation and device virtualization inside the
virtual machine.
The KVM for IBM z Systems kernel provides the core virtualized infrastructure. It can
schedule virtual machines on real CPUs and manage their access to real memory. QEMU
runs in a user space and implements virtual machines using KVM module functions.
QEMU virtualizes real storage and network resources for a virtual machine, which, in turn,
uses virtio drivers to access these virtualized resources, as shown in Figure 1-2.
Figure 1-2 Open source virtualization: KVM for IBM z Systems
The network interface in Linux on z Systems is a virtual Ethernet interface. The interface
names have the form eth<n> (for example, eth0). Multiple Ethernet interfaces can be defined
to Linux and are handled by the virtio_net device driver module.
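A virtual Ethernet interface of this kind is defined in the virtual machine's libvirt .xml file. The following is a hedged sketch of such a definition using a direct (MacVTap) connection; the host device name enccw0.0.a000 and the MAC address are hypothetical values, not taken from this book's environment:

```xml
<!-- Hypothetical sketch: a virtio network interface using a direct
     (MacVTap) connection to a host interface. The device name
     enccw0.0.a000 and the MAC address are illustrative values only. -->
<interface type='direct'>
  <mac address='02:00:00:00:00:01'/>
  <source dev='enccw0.0.a000' mode='bridge'/>
  <model type='virtio'/>
</interface>
```

Inside the guest, an interface defined this way appears as a regular Ethernet device served by the virtio_net driver.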
In Linux, a generic virtual block device is used rather than specific devices, such as ECKD or
SCSI devices. The virtual block devices are handled by the virtio_blk device driver module.
For information about KVM, see KVM — an open cross-platform virtualization alternative, a
smarter choice:
http://www.ibm.com/systems/virtualization/kvm/
Browse KVM for IBM z Systems product publications in the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_kvm.html
1.2.5 What comes with KVM for IBM z Systems
KVM for IBM z Systems provides standard Linux and KVM interfaces for operational control of
the environment, such as standard drivers and application programming interfaces (APIs), as
well as system emulation support and virtualization management. Included as part of KVM for
IBM z Systems are the following components:
The command-line interface (CLI) is a common, familiar Linux interface environment used
to issue commands and interact with the KVM hypervisor. The user issues successive
commands to change or control the environment.
Libvirt is open source software that supports KVM and many other hypervisors, providing
low-level virtualization management capabilities. It interfaces with KVM and can be driven
through a CLI called virsh. A list of key virsh commands is included in “Using virsh” on page 67.
The IBM z Systems Hypervisor Performance Manager (zHPM) monitors virtual machines
running on KVM to achieve goal-oriented policy-based performance goals (see
Appendix C, “Basic setup and use of zHPM” on page 103).
Open vSwitch (OVS) is open source software that allows for network communication
between virtual machines and the external networks that are hosted by the KVM
hypervisor. See this website for more information:
http://www.openvswitch.org
MacVTap is a device driver used to virtualize bridge networking and is based on the
macvlan device driver. See this website for more information:
http://virt.kernelnewbies.org/MacVTap
QEMU is open source software that is a hardware emulator for virtual machines running
on KVM. It also provides management and monitoring functions for the KVM virtual
machines. For more information, see the QEMU.org wiki:
http://wiki.qemu.org
The installer offers a series of panels to assist and guide the user through the installation
process. Each panel has setting selections that can be made to customize the KVM
installation. See Chapter 3, “Installing and configuring the environment” on page 27 for
examples of the installer panels.
Nagios remote plug-in executor (NRPE) can be used with KVM for IBM z. NRPE is an
add-on that allows you to execute plug-ins on KVM for IBM z. With those plug-ins, you can
monitor resources, such as disk usage, CPU load, and memory usage. For more
information, see “Configuring the Nagios monitoring tool” on page 64.
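As a concrete illustration of how these components fit together, the following is a minimal, hypothetical libvirt domain XML fragment that attaches a guest's virtio network interface to an Open vSwitch bridge. The bridge name ovsbr0 and the MAC address are illustrative values, not from this book's environment:

```xml
<!-- Hypothetical sketch: a guest interface attached to an Open vSwitch
     bridge. The bridge name (ovsbr0) and MAC address are illustrative. -->
<interface type='bridge'>
  <mac address='02:00:00:00:00:02'/>
  <source bridge='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>
```

The virtualport element is what tells libvirt to connect the interface through Open vSwitch rather than a plain Linux bridge.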
1.3 Managing the KVM for IBM z Systems environment
KVM for IBM z Systems integrates with standard OpenStack virtualization management,
which enables enterprises to easily integrate Linux servers into their infrastructure and cloud
offerings.
KVM for IBM z Systems supports libvirt APIs to enable CLIs (and custom scripting) to be used
to administer the hypervisor. KVM can be administered using open source tools, such as
virt-manager or OpenStack. KVM for IBM z Systems can also be administered and managed
by using IBM Cloud Manager with OpenStack (see Figure 1-3 on page 8). IBM Cloud
Manager is created and maintained by IBM and built on OpenStack.
Figure 1-3 KVM for IBM z Systems management interfaces
KVM for IBM z Systems can be managed just like any other KVM hypervisor by using the
Linux CLI. The Linux CLI provides a familiar experience for platform management.
In addition, an open source tool called Nagios can be used to monitor the KVM for IBM z
Systems environment.
Libvirt provides different methods of access through a layered approach, from a command
line called virsh in the libvirt tools layer to a low-level API for many programming languages
(see Figure 1-4).
Figure 1-4 KVM management via libvirt API layers
(The layers, from bottom to top: hardware, hypervisor layer, libvirtd, libvirt API layer, libvirt tools layer, and application layer.)
The main component of the libvirt software is the libvirtd daemon. This is the component that
interacts directly with QEMU and the KVM kernel at the hypervisor layer. QEMU manages
and monitors the KVM virtual machines by performing the following tasks:
Manage the I/O between virtual machines and KVM
Create virtual disks
Change the state of a virtual machine:
– Start a virtual machine
– Stop a virtual machine
– Suspend a virtual machine
– Resume a virtual machine
– Delete a virtual machine
– Take and restore snapshots
See the libvirt website for more information about libvirt:
http://libvirt.org
1.3.1 IBM z Systems Hypervisor Performance Manager (zHPM)
zHPM monitors and manages workload performance of the virtual machines under KVM by
performing the following operations:
Detect when a virtual machine is not achieving its goals when it is a member of a
Workload Resource Group.
Determine whether the virtual machine performance can be improved with additional
resources.
Project the impact on all virtual machines of the reallocation of resources.
Redistribute processor resources if there is a good trade-off based on policy.
For more information, see Introduction to zHPM in the IBM Knowledge Center:
http://ibm.co/1japece
zHPM setup instructions and examples are in Appendix C, “Basic setup and use of zHPM” on
page 103.
1.4 Using IBM Cloud Manager with OpenStack
OpenStack is a cloud operating system that controls large pools of compute, storage,
and networking resources throughout a data center. It is based on the OpenStack project:
http://www.openstack.org/
IBM Cloud Manager with OpenStack is an advanced management solution that is created
and maintained by IBM and built on OpenStack. It can be used to get started with a cloud
environment and continue to scale with users and workloads, providing advanced resource
management with simplified cloud administration and full access to OpenStack APIs.
KVM for IBM z Systems compute nodes support the following OpenStack services:
Nova libvirt driver
Neutron agent for Open vSwitch
Ceilometer support
Cinder
The OpenStack compute node has an abstraction layer for compute drivers to support different
hypervisors, including QEMU and KVM for IBM z Systems through the libvirt API layer (see
Figure 1-4 on page 8).
2.1 Planning KVM for IBM z Systems
The supported hardware and software need to be configured as described in this chapter
before installation of KVM for IBM z Systems. An installation method also needs to be
determined, as described in this section.
2.1.1 Hardware requirements
The supported servers, storage hardware, and network features described in the subsections
that follow need to be confirmed before the installation begins.
Servers
The following servers are supported, using only the Integrated Facilities for Linux
(IFLs) that are activated:
IBM z13™
IBM zEC12
IBM zBC12
Storage
KVM for IBM z Systems supports small computer system interface (SCSI) devices and
extended count key data (IBM ECKD) devices. You can use either SCSI or ECKD devices or
both. The following storage devices are supported:
SCSI devices:
– IBM XIV®
– IBM Storwize® V7000
– IBM FlashSystem™
– SAN Volume Controller
– IBM DS8000® (FCP attached)
ECKD devices:
– DS8000 (IBM FICON attached)
The Fibre Channel Protocol (FCP) channel supports multiple switches and directors, which can
be placed between the IBM z Systems server and the SCSI device. This can provide
more choices for storage solutions or the ability to use existing storage devices. ECKD
devices can help to manage disks efficiently because KVM and Linux do not have to manage
the I/O path or load balancing; these are already handled by the IBM z Systems
hardware. You can choose SCSI devices, ECKD devices, or both for the KVM environment.
Host bus adapters
The following FICON features support connectivity to both SCSI and ECKD devices:
FICON Express16S
FICON Express8S
Network interface cards
The following Open Systems Adapter (OSA) features are supported:
IBM OSA-Express5S
IBM OSA-Express4S
IBM OSA-Express3 (zEC12 and zBC12 only)
With this OSA feature, KVM for IBM z Systems does not support VLANs or flat networks
together with Open vSwitch.1
Chapter 2. Planning the environment
Logical partitions (LPARs) for KVM
When you define and allocate resources to LPARs on which KVM is installed, consider CPU
and memory needs:
CPU
A minimum of 1 CPU (known as Integrated Facility for Linux, or IFL) must be assigned to
the KVM LPAR. The suggestion is to assign no more than 36 IFLs per KVM LPAR.
Memory
A maximum of 8 TB of RAM can be allocated per KVM LPAR. The suggestion is to
allocate no more than 1 TB of RAM per KVM LPAR.
For the IBM z Systems platform, your system must be at the proper firmware or microcode
level. At the time of writing, these were the appropriate levels:
For z13: N98805.010 D22H Bundle 20a
For zEC12 and zBC12: H49525.013 D15F Bundle 45a
For more information, search the Preventive Service Planning (PSP) buckets web page:
http://www.software.ibm.com/webapp/set2/psp/srchBroker
Search for the following PSP hardware upgrade identifiers:
For the IBM z13, the PSP bucket is 2964DEVICE.
For the IBM zEC12, the PSP bucket is 2827DEVICE.
For the IBM zBC12, the PSP bucket is 2828DEVICE.
2.1.2 Software requirements
The following software resources are required:
KVM for IBM z Systems V1.1.0 (Product Number 5648-KVM)
KVM for IBM z Systems can be ordered and delivered electronically using IBM Shopz:
http://www.ibm.com/software/ShopzSeries
After you download the ISO file from IBM Shopz, you can use it to install from an FTP
server or burn a DVD and use that for the installation.
The latest available Fix Pack for KVM for IBM z Systems
KVM for IBM z Systems 1.1.0.1 contains the current, cumulative fix packs. Download
these from IBM Fix Central:
http://www.ibm.com/support/fixcentral/
1 Open vSwitch is a multilayer virtual switch. For details, see this website: http://openvswitch.org/.
2.1.3 Installation methods
You can install KVM for IBM z Systems using either of the following methods:
From an FTP server, where the FTP server is in the same subnet as the Hardware
Management Console (HMC).
From a DVD (or a CD with a capacity of 800 MB or greater) that you create, containing the
install images. An FTP server is also required, but this method does not require the FTP
server to be in the same subnet as the IBM HMC. You will need to copy and create the
.ins and .prm files that correspond with your environment and burn them with the ISO
image to the physical DVD or CD.
More details about performing the installation from a DVD are available in KVM for IBM z
Systems: Planning and Installation Guide, SC27-8236-00 in the IBM Knowledge Center:
http://ibm.co/1Qxm1BW
The FTP server must be accessible from the target installation LPAR.
We chose the FTP server method of installation because it has more flexibility for creating
and updating the generic .prm file that is needed during installation. Before the installation,
we prepared the FTP server in our scenario to be in the same subnet as the HMC. Details of
the installation method from an FTP server are provided in Chapter 3, “Installing and
configuring the environment” on page 27.
2.2 Planning virtualized resources for KVM virtual machines
After installing KVM for IBM z Systems, you can plan and design the virtualized environments
to build (including CPU, memory, storage, and network) and run the virtual machines on KVM.
When adding virtual machines, you must create .xml files to define your virtual resources.
The following sections describe considerations for these virtual resources when you define
virtual machines.
2.2.1 Compute consideration
Virtual CPUs and memory are configured for a virtual machine by using the vcpu and
memory elements in the .xml file that defines the virtual machine.
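For example, a definition along the following lines allocates two virtual CPUs and 4 GiB of memory to a virtual machine. This is a hedged sketch of the two elements in isolation; the values shown are illustrative only:

```xml
<!-- Illustrative fragment of a virtual machine .xml definition.
     The values (2 virtual CPUs, 4 GiB of memory) are examples only. -->
<memory unit='GiB'>4</memory>
<vcpu>2</vcpu>
```

These elements appear inside the virtual machine's domain definition alongside the disk and interface elements shown later in this chapter.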
KVM supports CPU and memory over-commitment. To maximize performance, it is
suggested that you define the minimum number of virtual CPUs and memory necessary for
each virtual machine. If you allocate more virtual CPUs to the virtual machines than are
needed, the system still works, but this configuration can cause performance degradation as
the number of virtual machines increases. Consider these suggestions:
CPU:
– The suggested over-commit ratio of CPUs is 10:1 (virtual-to-real). The real CPUs in
this case are the IFLs assigned to the KVM LPAR.
– Do not define more virtual CPUs to a virtual machine than the number of IFLs assigned
to the KVM LPAR. The maximum number of virtual CPUs per virtual machine is 64.
Note: You must prepare your own FTP server and upload the ISO file for KVM for IBM z
Systems to the FTP server before installation. The installation method you select depends
on the subnet of the FTP server.
Memory:
– The suggested over-commit ratio of memory is 2:1 (virtual-to-real).
You can configure the CPU weight of a virtual machine, and you can modify it during
operation. A virtual machine's share of CPU is determined by its weight relative to the
weights of the other virtual machines (its weight fraction). CPU weight is helpful for managing
your virtual machines by priority or server workload. Additional details and examples of CPU
share are available under “CPU management” in KVM Virtual Server Management, SC34-2752-00:
http://ibm.co/1PQkXHW
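As a hypothetical sketch, the weight can be expressed with the shares element of the virtual machine's .xml definition. The value 2048 is illustrative; 1024 is the common default weight in libvirt, so this would give the virtual machine roughly twice the CPU share of a default guest when real CPUs are contended:

```xml
<!-- Hypothetical sketch: set this virtual machine's CPU weight to twice
     the usual libvirt default of 1024. Weights only take effect when the
     real CPUs are contended. -->
<cputune>
  <shares>2048</shares>
</cputune>
```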
2.2.2 Storage consideration
KVM supports virtualization of several storage devices on a KVM LPAR. You can typically use
block devices or disk image files to connect with local storage devices on the virtual machine.
Block device
A virtual machine that uses block devices for local mass storage typically performs better than
a virtual machine that uses disk image files. Block devices achieve lower latency and higher
throughput because I/O passes through fewer software layers. Figure 2-1 shows the block
devices that QEMU can use for KVM virtual machines.
Figure 2-1 Block devices for KVM virtual machines
(The figure shows a KVM LPAR with SCSI-backed guests and one with ECKD-backed guests. On the SCSI side, VM01 uses an entire disk (sda) as vda and a partition (sdb1) as vdb, and VM02 uses a partition (sdb2) as vda and a logical volume (vm02-lv in VolGroup01) as vdb. On the ECKD side, VM03 uses an entire DASD (dasda) as vda and a partition (dasdb1) as vdb, and VM04 uses a partition (dasdb2) as vda and a logical volume (vm04-lv in VolGroup02) as vdb. QEMU presents each backing device to its guest as a virtio disk.)
The following block devices are supported by QEMU:
Entire devices
A physical disk, such as a SCSI or ECKD device, can be defined as a virtual disk of a
virtual machine. The virtual machine then uses all of the physical disk space.
Example 2-1 shows a sample .xml file that defines a virtual disk backed by an entire
physical device.
Example 2-1 Sample .xml for entire devices of VM01
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/sda'/>
<target dev='vda' bus='virtio'/>
</disk>
Disk partitions
KVM for IBM z Systems can partition a physical disk. Each partition can be allocated to
the same or different virtual machines. This can help to use a large physical disk
more efficiently.
Example 2-2 shows a sample .xml file to define a virtual disk to use partitions.
Example 2-2 Sample .xml for disk partitions of VM01
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/sdb1'/>
<target dev='vdb' bus='virtio'/>
</disk>
Logical volume manager (LVM) logical volumes
KVM can create and manage logical volumes using LVM. This makes it easier to manage
the available storage in general, and it also makes it easier to back up your virtual
machines without shutting them down, thanks to LVM snapshots.
Example 2-3 shows a sample .xml file to define a virtual disk to use logical volumes.
Example 2-3 Sample .xml for logical volumes of VM02
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/VolGroup00/LogVol00'/>
<target dev='vda' bus='virtio'/>
</disk>
The following requirements must be considered when choosing to use block devices:
All block devices must be available and accessible to the hypervisor. The virtual machine
cannot access devices that are not available from the hypervisor.
Some block devices must be activated or enabled before they can be used. For
example, LVM logical volumes must be active.
File
A disk image file is a file that represents a local hard disk to the virtual machine. This
representation is a virtual hard disk. The size of the disk image file determines the maximum
size of the virtual hard disk. A disk image file of 100 GB can produce a virtual hard disk of 100
GB.
The disk image file resides outside of the virtual machine. Other than the size of the
disk image file, the virtual machine cannot access any information about the file itself.
The disk image file can be located in a file system on any of the block devices shown in
Figure 2-1 on page 15 that are mounted on KVM. Disk image files can also be located in a
remote file system that is accessed across a network connection.
The following file types are supported by QEMU:
Raw
A raw type of disk image file preallocates all of the storage space that the virtual machine
uses when the file is created. The file resides in the KVM file system, and it requires less
overhead than QEMU Copy On Write (QCOW2).
Example 2-4 shows a sample .xml file to define a raw image file.
Example 2-4 Sample .xml to use a raw type of disk image file
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/var/lib/libvirt/images/sl12sp0.img'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
</disk>
QCOW2
QCOW2 uses a disk storage optimization strategy that delays the allocation of storage until
it is actually needed. A QCOW2 disk image file grows as data is written, so it starts
smaller than a raw disk image file and can use the file system space of the KVM host
more efficiently.
Example 2-5 shows a sample .xml file that defines a QCOW2 image file.
Example 2-5 Sample .xml to use QCOW2 disk image file
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/sl12sp0.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
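The difference between up-front and delayed allocation can be sketched with ordinary files. This is a minimal illustration using coreutils only; the paths and sizes are illustrative stand-ins, and real image files would be created with qemu-img instead:

```shell
# A preallocated file: blocks are assigned when the file is created,
# as with a raw disk image file.
fallocate -l 10M /tmp/prealloc-demo.img

# A sparse file: only the size is recorded, and blocks are assigned as
# data is written, which is the strategy QCOW2 uses.
truncate -s 10M /tmp/sparse-demo.img

# Both files report the same apparent size ...
stat --format='%n apparent=%s bytes' /tmp/prealloc-demo.img /tmp/sparse-demo.img
# ... but only the preallocated file occupies blocks on disk.
stat --format='%n allocated=%b blocks' /tmp/prealloc-demo.img /tmp/sparse-demo.img
```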
A virtual machine that uses block devices for local mass storage typically performs better than
a virtual machine that uses disk image files for the following reasons:
Managing the file system where the disk image file is located creates an additional
resource demand for I/O operations.
Improper partitioning of mass storage using disk image files can cause unnecessary I/O
operations.
18 Getting Started with KVM for IBM z Systems
However, disk image files provide the following benefits:
Containment
Many disk image files can be in a single storage unit. For example, disk image files can be
located on disks, partitions, logical volumes, and other storage units.
Usability
Managing multiple files is easier than managing multiple disks, multiple partitions, multiple
logical volumes, multiple arrays, and other storage units.
Mobility
You can easily move files from one location or system to another location or system.
Cloning
You can easily copy and modify files for new VMs to use.
Sparse files save space
Using a file system that supports sparse files conserves disk space for regions of the
image that have not been written.
Remote and network accessibility
Files can be in file systems on remote systems that are connected by a network.
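For example, cloning a disk image for a new virtual machine is a plain file copy. In this hedged sketch, /tmp paths stand in for an image directory such as /var/lib/libvirt/images, and a small sparse file stands in for a real guest image:

```shell
# Stand-in sparse guest image (10 MB here; a real image would be GBs).
truncate -s 10M /tmp/sl12sp0-demo.img

# Clone it for a new VM; --sparse=always keeps unwritten regions sparse.
cp --sparse=always /tmp/sl12sp0-demo.img /tmp/newvm-demo.img

# The first column of ls -ls shows blocks actually allocated (near zero).
ls -ls /tmp/newvm-demo.img
```

Moving an image to another host is likewise just a file transfer (for example, with scp), which is the mobility benefit noted above.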
2.2.3 Network considerations
KVM can provide network devices as virtual Ethernet devices by configuring direct MacVTap2
connections or Open vSwitch connections. To set up a virtual network on KVM, for the
purposes of this book, we considered the following factors:
For redundancy of network devices, we considered bonding two IBM Open Systems
Adapters (OSAs). Both MacVTap and Open vSwitch can be configured with a bonding
device.
In a cloud environment, it is typical to separate the management network from the data
network. For isolation between multiple networks, we prepared and set up separate OSA
devices, each connected to a different network.
As of this writing, Open vSwitch is supported by IBM Cloud Manager with OpenStack, but
MacVTap is not yet supported.
We chose to use Open vSwitch in our configuration because it is supported by IBM Cloud
Manager with OpenStack. Open vSwitch also provides more flexibility and easier
management through its command-line interface (CLI) and a database that stores the
network information, which reduces complexity as compared to MacVTap managed through
the CLI and an .xml file.
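A minimal Open vSwitch setup for a management/data split might look like the following sketch. The bridge, bond, and interface names are illustrative assumptions, the commands require root and a running Open vSwitch daemon, and the block exits quietly where Open vSwitch is not available:

```shell
# Skip quietly if the Open vSwitch daemon is not reachable.
ovs-vsctl --timeout=5 show >/dev/null 2>&1 || exit 0

# One bridge per network, separating management and data traffic.
ovs-vsctl add-br vsw-mgmt
ovs-vsctl add-br vsw-data

# Bond two OSA interfaces under the data bridge for redundancy
# (interface names are illustrative).
ovs-vsctl add-bond vsw-data bond1 enccw0.0.2d00 enccw0.0.2e00

ovs-vsctl show
```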
Important: Whether you use SCSI devices or ECKD devices, disk multipathing for
virtual machines is not required. For SCSI devices, disk multipathing is handled by KVM
for IBM z Systems. For ECKD devices, the I/O paths are handled by PR/SM in z Systems
hardware.
2 MacVTap is a new device driver meant to simplify virtualized bridged networking. For more information, see
http://virt.kernelnewbies.org/MacVTap
2.2.4 Software considerations
To operate Linux on z Systems as a virtual machine of KVM for IBM z Systems, a Linux on z
Systems distribution must be obtained from a Linux distribution partner. SUSE Linux
Enterprise Server (SLES) 12 SP1 is supported as a guest in KVM for IBM z Systems virtual
machines.
2.2.5 Live migration
To perform a live migration, the source and destination hosts must be connected and have
access to the same or equivalent system resources, and to the same storage devices and
networks. There are no restrictions on the location of the destination host: it can run in
another LPAR on the same server or on another z Systems server.
Carefully consider system resources, storage, network, and performance when you prepare
to migrate a virtual machine to another host. Details are available in the
KVM Virtual Server Management section of the IBM Knowledge Center:
KVM Virtual Server Management section of the IBM Knowledge Center:
http://ibm.co/1PD9s89
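As a hedged sketch (not taken from the referenced documentation), a live migration can be driven with virsh. The domain name sl12sp0 and destination host itsokvm2 are illustrative assumptions, and the command is skipped quietly when the domain is not present:

```shell
# Skip quietly when libvirt or the example domain is not available.
virsh domstate sl12sp0 >/dev/null 2>&1 || exit 0

# Push the running guest to the destination hypervisor over SSH.
virsh migrate --live --verbose sl12sp0 qemu+ssh://itsokvm2/system
```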
2.3 Planning KVM virtual machine management
Libvirt3 is a management tool that is installed with KVM. You can create, delete, run, stop, and
manage your virtual machines by using the virsh command, a command-line interface that is
built on the libvirt API. Virsh operations rely on the ability of the library to connect to a running
libvirtd daemon. Therefore, the daemon must be running before you use virsh.
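A typical lifecycle with virsh might look like the following sketch. The domain name and XML path are illustrative assumptions, and the commands are skipped quietly when libvirtd is not reachable or the definition file is absent:

```shell
# Skip quietly when libvirtd is not reachable or the XML is missing.
virsh -c qemu:///system list >/dev/null 2>&1 || exit 0
[ -r /var/lib/libvirt/images/sl12sp0.xml ] || exit 0

virsh define /var/lib/libvirt/images/sl12sp0.xml   # register the VM
virsh start sl12sp0                                # boot it
virsh list --all                                   # running and defined VMs
virsh shutdown sl12sp0                             # request a graceful stop
# ... and after the guest has stopped:
# virsh undefine sl12sp0                           # remove the definition
```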
If you plan to manage a virtual environment on KVM as one of the resources in a cloud,
IBM Cloud Manager with OpenStack can support it. To manage your virtual environment with
IBM Cloud Manager with OpenStack, review its hardware, operating system, and software
prerequisites. IBM Cloud Manager with OpenStack supports KVM for IBM z Systems as
compute nodes. Also consider the KVM for IBM z Systems prerequisites in a virtualization
environment:
IBM Cloud Manager with OpenStack prerequisites
http://ibm.co/1OiaXWb
KVM for IBM z Systems prerequisites
http://ibm.co/1PD9zRg
2.4 Planning a cloud infrastructure with KVM and
IBM Cloud Manager with OpenStack
In this book, we illustrate a simple scenario for building a cloud infrastructure with KVM and
IBM Cloud Manager with OpenStack to evaluate the virtualization and management
functions. These functions include the ability to create, delete, run, and stop the virtual
machine, to create a virtual network and virtual storage, to perform live migration, and to
clone a virtual machine. This section provides information to review before building your cloud
environment.
3 Libvirt is a management tool that installs with KVM. Visit http://wiki.libvirt.org/page/Virtio
In this section, we describe planning considerations and information about the following
situations:
KVM installation
Virtual machines
IBM Cloud Manager with OpenStack installation
IBM Cloud Manager with OpenStack deployment
If you plan to build and manage a virtual environment using only KVM, skip the following
sections:
2.4.3, “Planning for IBM Cloud Manager with OpenStack installation” on page 22
2.4.4, “Planning for IBM Cloud Manager with OpenStack deployment” on page 24
2.4.1 Planning for KVM for IBM z Systems installation
This section describes the considerations for installing KVM for IBM z Systems. Then we
outline the information required for the installation process.
Planning considerations
Consider the following areas before installing KVM for IBM z Systems:
Number of CPUs in LPAR
This depends on the number of virtual CPUs needed and the level of planned
over-commitment.
Amount of memory in LPAR
This depends on the memory needed for the virtual machines and the level of planned
memory over-commitment.
DVD or FTP installation
As described in 2.1.3, “Installation methods” on page 14, it is possible to start the
installation from the HMC using a DVD drive or from an FTP server. The choice depends
on your environment.
Type of storage
Choose either SCSI or ECKD devices that KVM for IBM z Systems will use.
Storage space for virtual machines
Consider how to provide storage to virtual machines. For example, do you plan to use
whole disks attached to virtual machines or a QCOW2 file? Do you plan to expand LVM?
Number of OSA ports and networking
KVM for IBM z Systems needs only one OSA port. However, to provide redundancy, it is
suggested that you use a bonding interface and more than one OSA port.
Networking for virtual machines
Consider how your virtual machines will be connected to the LAN. For example, will you be
using MacVTap or Open vSwitch? Will you use VLANs? If you will be using Open vSwitch,
how many Open vSwitch bridges are needed?
Information required for installation
The following is a list of information that you will need during installation:
FTP information
IP address of the FTP server, FTP directory with required files, FTP credentials
OSA device address
The OSA triplet which will be used to create the KVM for IBM z Systems network interface
card (NIC)
Networking information
For KVM for IBM z Systems, the IP address, network mask, default gateway, and host
name
VLAN (if needed)
Parent interface of VLAN, VLAN ID
DNS (if needed)
IP addresses of DNS servers, search domain
Network time protocol (NTP) (if needed)
Addresses of NTP servers to be used by KVM for IBM z
Installation disks
If you are installing on SCSI devices, the following information is required to establish a
path to the related storage:
– FCP device address
– The target WWPN (disk storage subsystem WWPN)
– LUN ID
If installing on ECKD devices, the DASD device address is required.
Root password
The password for the root user
2.4.2 Planning for virtual machines
This section describes the considerations for virtual machines. Then, we outline the
information required for the installation process.
Planning considerations
Consider the following areas before installing a virtual machine:
Number of virtual CPUs
Amount of memory
Virtual machines need to have enough memory to avoid paging. However, too much
memory for a virtual machine will leave less shared memory for other virtual machines.
Installation source
Storage space for virtual machines
Consider how to provide storage to virtual machines. For example, do you plan to use
whole disks attached to virtual machines, or a QCOW2 file? Do you plan to expand LVM?
I/O drivers
Use virtio drivers. There are no specific drivers for SCSI, ECKD, and NICs in virtual
machines.
Multipath
No disk multipathing is needed in the virtual machines; it is all handled by KVM. See the
shaded box marked “Important” on page 18 for further information.
Networking
Plan how many virtual network adapters will be needed for a virtual machine and whether
they will handle VLAN tags.
Information required for installation
The following list depends on the operating system that will be installed. This type of
information is required during installation:
FTP information (assuming FTP installation)
IP address of FTP server, FTP directory with required files, FTP user identification and
password
Networking information
Virtual machine IP address, network mask and default gateway, host name
VLAN
Parent interface of VLAN, VLAN ID
DNS (if needed)
IP addresses of DNS servers, search domain
NTP (if needed)
IP addresses of NTP servers to be used by the virtual machine
File system layout
2.4.3 Planning for IBM Cloud Manager with OpenStack installation
This section describes areas to consider when planning to install IBM Cloud Manager with
OpenStack. Then, we outline the information that is required for the installation process.
If you plan to build and manage a virtual environment using only KVM, skip this section.
Planning considerations
Consider the following before installing IBM Cloud Manager with OpenStack:
Hardware
The deployment server and controller for IBM Cloud Manager with OpenStack 4.3 do not
support installation on a z Systems platform. An x86 server, with its CPU, memory, disk,
and NIC, is needed for the cloud environment. For detailed information about the hardware
prerequisites, see IBM Cloud Manager with OpenStack hardware prerequisites in the IBM
Knowledge Center:
http://ibm.co/1SJUM54
Also, consider whether you will install and run the deployment server, controller, and
database server on the same or separate nodes.
Operating systems
At the time of writing, Red Hat Enterprise Linux Version 7.1 (64-bit) is supported for the
deployment and controller servers on an x86 server.
Database server
Determine the database server product that will be used for the IBM Cloud Manager with
OpenStack databases. As of this writing, the supported databases are IBM DB2®,
MariaDB, and MySQL.
Yum repository
Use Red Hat Subscription Management or a local yum repository.
Installation method
Install from DVDs or by downloading and installing packages using CLI, GUI, or silent
installation.
Information required for installation
The following information is required during installation:
Networking information
IP address, network mask and default gateway, host name with a fully qualified domain
name that includes the domain suffix
DNS server
IP address of the DNS server which has the host name for the deployment server
Yum repository
IP address or host name of the repository server and directory
Root password or user ID with root authority
Root authority is required to run the installer
NTP server
IP addresses of NTP servers to be used by the deployment server and all nodes
Systemd4 status
systemd must be in running status because the product installer requires a functional
systemd environment, and systemd is used to manage the service state of the Chef server
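A quick sketch of verifying this is to query systemd's overall state, which is expected to report running; the check is skipped quietly where systemctl is absent:

```shell
# Skip quietly on systems without systemctl.
command -v systemctl >/dev/null 2>&1 || exit 0

# Prints "running" on a healthy system; other states such as
# "degraded" or "offline" indicate that systemd needs attention first.
systemctl is-system-running || true
```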
4 systemd is a suite of basic building blocks for a Linux system. Visit
http://www.freedesktop.org/wiki/Software/systemd/.
2.4.4 Planning for IBM Cloud Manager with OpenStack deployment
This section describes considerations for deploying the controller and compute node. Then,
we outline the information required for the deployment process.
If you plan to build and manage a virtual environment using only KVM, skip this section.
Planning considerations
Consider the following before deploying cloud environment components, such as the
controller node, compute node, and database node:
Topology
IBM Cloud Manager with OpenStack provides five predefined topologies. A description of
each topology is shown in Table 5-1 on page 79. Consider which topology will be used.
Database server
Determine the database server product that will be used for the IBM Cloud Manager with
OpenStack databases. As of this writing, the supported databases are DB2, MariaDB, and
MySQL.
Number of NICs
Only one NIC is needed for the management network of KVM for IBM z Systems as a
compute node. However, if you want virtual machines on the compute node to use the
DHCP and L3 services provided by Neutron5, the controller and compute nodes must have
at least two NICs: one for the management network and one for the data network.
Network type
Determine the network type: local, flat, VLAN, generic routing encapsulation (GRE), or
virtual extensible LAN (VXLAN).
Web browsers
Select a web browser on your desktop environment as the client to access the IBM Cloud
Manager with OpenStack servers. These are the minimum supported versions:
– Internet Explorer 11.0 with latest fix pack
– Firefox 31 with latest fix pack
– Chrome 38 with latest fix pack
– Safari 7 with latest fix pack
Information required for deployment
This list depends on the topology that will be used, but this type of information is usually
required during deployment:
Controller node
Environment name
IP address
Network interface name
Open vSwitch network type
Fully qualified domain name
The root user login information (either a password or a Secure Shell (SSH) identity file)
5 OpenStack Networking (Neutron). See either:
http://docs.openstack.org/icehouse/install-guide/install/apt/content/basics-networking-neutron.html
or https://wiki.openstack.org/wiki/Neutron#OpenStack_Networking_.28.22Neutron.22.29
Compute node for KVM for IBM z Systems
Topology name of compute node
Environment name
Fully qualified domain name
The root user login information (either password or SSH identity file)
IP address
Network interface name
Deployment of virtual machines
Network information, including subnet, IP address for the subnet, IP address of gateway,
IP version, DNS server
Image source location and image file name
Image format (for example QCOW2)
Minimum disk and minimum RAM (if needed)
3.1 Our configuration
This section describes our target configuration and the components and hardware resources
that we use to implement it.
3.1.1 Logical view
Figure 3-1 illustrates a logical view of our target configuration. Our goal is to allow virtual
machines to connect to two different networks: One for management traffic and the other for
user data traffic. This is achieved by creating two separate Open vSwitch bridges. KVM for
IBM z Systems is connected directly to the management network.
We implemented two KVM for IBM z Systems images with the same logical configuration so
that the virtual servers can be migrated between hypervisors as needed.
Figure 3-1 Logical configuration
3.1.2 Physical resources
Figure 3-2 on page 29 shows our hardware and connectivity setup:
One IBM z13 with two LPARs
Two OSA cards connected to the management network
Two OSA cards connected to a data network
Multiple FICON cards for connectivity to storage
– SCSI devices
– ECKD devices
One FTP server
One x86 server running IBM Cloud Manager with OpenStack (controller node)
Both LPARs have access to all resources. We used one LPAR for installing KVM for IBM z
Systems on SCSI devices and the other LPAR for installing KVM for IBM z on ECKD devices.
Chapter 3. Installing and configuring the environment 29
Figure 3-2 Our environment - hardware resources and connectivity
3.1.3 Preparation tasks
There are several tasks to perform before the KVM for IBM z installer can be started, which
we explain in the subsections that follow:
Input/output configuration data set (IOCDS)
Storage area network (SAN)
FTP server
Input/output configuration data set (IOCDS)
An IOCDS was prepared to support our environment, as shown in Figure 3-2. We had two
logical partitions (A25 and A2F) with different channel types (OSA CHPIDs, FCP CHPIDs,
and FICON CHPIDs).
An IOCDS sample for the LPARs and each channel type is provided in Example 3-1.
Example 3-1 Sample IOCDS definitions
******************************************************
**** Sample LPAR and Channel Subsystem ******
******************************************************
RESOURCE PARTITION=((CSS(0),(A25,5),(A2F,F)))
******************************************************
**** Sample OSA CHPID / CNTLUNIT and IODEVICE ******
******************************************************
CHPID PATH=(CSS(0),04),SHARED, *
PARTITION=((CSS(0),(A25,A2F),(=))), *
PCHID=214,TYPE=OSD
CNTLUNIT CUNUMBR=2D00, *
PATH=((CSS(0),04)), *
UNIT=OSA
IODEVICE ADDRESS=(2D00,015),CUNUMBR=(2D00),UNIT=OSA
IODEVICE ADDRESS=(2D0F,001),UNITADD=FE,CUNUMBR=(2D00), *
UNIT=OSAD
******************************************************
**** Sample FCP CHPID / CNTLUNIT and IODEVICE ******
******************************************************
CHPID PATH=(CSS(0),76),SHARED, *
PARTITION=((CSS(0),(A25,A2F),(=))), *
PCHID=1B1,TYPE=FCP
CNTLUNIT CUNUMBR=B600, *
PATH=((CSS(0),76)),UNIT=FCP
IODEVICE ADDRESS=(B600,032),CUNUMBR=(B600),UNIT=FCP
IODEVICE ADDRESS=(B6FC,002),CUNUMBR=(B600),UNIT=FCP
******************************************************
**** Sample FICON CHPID / CNTLUNIT and IODEVICE ******
******************************************************
CHPID PATH=(CSS(0),48),SHARED, *
PARTITION=((CSS(0),(A25,A2F),(=))), *
SWITCH=61,PCHID=11D,TYPE=FC
CNTLUNIT CUNUMBR=6200, *
PATH=((CSS(0),48)),UNITADD=((00,256)), *
LINK=((CSS(0),08)),CUADD=2,UNIT=2107
IODEVICE ADDRESS=(6200,042),CUNUMBR=(6200),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(622A,214),CUNUMBR=(6200),STADET=Y,SCHSET=1, *
UNIT=3390A
For more information about IOCDS, see Stand-Alone Input/Output Configuration Program
User’s Guide, IBM System z, SB10-7152:
http://www.ibm.com/support/docview.wss?uid=pub1sb10715206
Storage area network (SAN)
The SAN configuration usually involves tasks such as cabling, zoning, and LUN masking. We
defined 10 LUNs on disk storage and targeted the worldwide port names (WWPNs) of the
disk adapters.
FTP server
We used an FTP server with IP address 192.168.60.15 and FTP user credentials. We created
two directories in the FTP directory: KVM and SLES12SP1. In each directory, we created a
DVD1 directory to which we mounted the corresponding .iso file.
Because the DVD1 directory is mounted as read-only, and because we needed to create
various .ins and .prm files, we copied the DVD1/images directory to the main KVM directory
and created .ins files in that directory. Then, we created corresponding .prm files in the
images/ directory.
The resulting structure looks like this:
KVM/
– DVD1/ (KVM for IBM z ISO image mounted as read-only)...
– images/
• generic.prm
• initrd.addrsize
• initrd.img
• install.img
• itso1.prm
• itso2.prm
• kernel.img
• TRANS.TBL
• upgrade.img
– itso1.ins
– itso2.ins
SLES12SP1/
– DVD1/ (SLES12SP1 ISO image mounted as read-only)
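The steps that produce this layout can be sketched as follows. The ISO file names and the FTP root path are illustrative assumptions, mounting requires root, and the commands are skipped quietly when the ISOs are not present:

```shell
FTPROOT=/srv/ftp                       # illustrative FTP root
KVMISO=/isos/KVM-for-IBM-z.iso         # illustrative ISO names
SLESISO=/isos/SLES12SP1-DVD1.iso
[ -r "$KVMISO" ] && [ -r "$SLESISO" ] || exit 0

mkdir -p "$FTPROOT/KVM/DVD1" "$FTPROOT/SLES12SP1/DVD1"
mount -o loop,ro "$KVMISO"  "$FTPROOT/KVM/DVD1"
mount -o loop,ro "$SLESISO" "$FTPROOT/SLES12SP1/DVD1"

# DVD1 is read-only, so copy images/ out to hold the editable .prm files;
# the .ins files are then created in $FTPROOT/KVM.
cp -r "$FTPROOT/KVM/DVD1/images" "$FTPROOT/KVM/images"
```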
3.2 Setting up KVM for IBM z Systems
This section lists the steps needed to install KVM for IBM z, from the preparation tasks,
through the installation process, to the final configuration for our environment.
We describe the following tasks in this section:
Preparing the .ins and .prm files
Installing KVM for IBM z
Configuring KVM for IBM z
Note: This section shows the installation and configuration of KVM for IBM z with SCSI
devices. There are only subtle changes when installing on ECKD devices, as described in
Appendix A, “Installing KVM for IBM z Systems with ECKD devices” on page 95.
3.2.1 Preparing the .ins and .prm files
As described in “FTP server” on page 31, we had an FTP server to use for installing KVM for
IBM z. We created a directory structure that contained the .ins and .prm files needed for the
KVM for IBM z installer.
Example 3-2 shows the contents of the itso1.ins file, which is a copy of the generic.ins file
provided in the DVD1 directory. Only the line pointing to the parameter file was modified to
reference itso1.prm.
Example 3-2 itso1.ins
* for itsokvm1
images/kernel.img 0x00000000
images/initrd.img 0x02000000
images/itso1.prm 0x00010480
images/initrd.addrsize 0x00010408
Example 3-3 shows the itso1.prm file. It defines LUNs for the installer, network properties,
and the location of the FTP repository.
Example 3-3 itso1.prm
ro ramdisk_size=40000 rd.zfcp=0.0.b600,0x500507680120bc24,0x0000000000000000
rd.zfcp=0.0.b600,0x500507680120bc24,0x0001000000000000
rd.zfcp=0.0.b600,0x500507680120bc24,0x0002000000000000
rd.zfcp=0.0.b700,0x500507680120bb91,0x0000000000000000
rd.zfcp=0.0.b700,0x500507680120bb91,0x0001000000000000
rd.zfcp=0.0.b700,0x500507680120bb91,0x0002000000000000
rd.znet=qeth,0.0.2d00,0.0.2d01,0.0.2d02,layer2=1,portno=0,portname=DUMMY
ip=192.168.60.70::192.168.60.1:255.255.255.0:itsokvm1:enccw0.0.2d00:none
inst.repo=ftp://ftp:ftp@192.168.60.15/KVM/DVD1
Each rd.zfcp statement contains three parameters which, together, define a path to a
LUN. The first parameter defines the FCP device on the server side (actually, a device
from IOCDS). The second parameter defines the target WWPN, which is a WWPN of disk
storage. The third parameter provides a LUN number. This means that the rd.zfcp
statements in Example 3-3 define two different paths to each of three LUNs.
The rd.znet statement defines which device triplet is used as the NIC for an installer.
The ip statement defines the IP properties for the NIC.
The inst.repo statement defines the location of the install repositories for KVM for IBM z.
In our case, this is the read-only directory of a loop-mounted ISO image.
3.2.2 Installing KVM for IBM z
This section describes the steps for installing KVM for IBM z with SCSI devices.
Figure 3-3 shows two logical partitions: A25 and A2F. Both partitions are active without a
running operating system.
Figure 3-3 Two unused logical partitions
We installed KVM for IBM z using an FTP server.
Figure 3-4 shows how to invoke the Load from Removable Media, or Server panel by
selecting a target LPAR, clicking the small arrow icon next to its name, and selecting
Recovery and then Load from Removable Media, or Server task.
Figure 3-4 Invoke Load from Removable Media, or Server
Figure 3-5 shows the window in which we provided the IP address of our FTP server, together
with FTP credentials. The file location field points to the directory where we put our .ins files
as described in 3.1.3, “Preparation tasks” on page 29.
Figure 3-5 Load from Removable Media, or Server
When the FTP server is contacted, a table listing all of the .ins files displays. We chose the
itso1.ins file, as shown in Figure 3-6. This file contains all the necessary information for
installing KVM for IBM z on our SCSI devices.
Figure 3-6 Select the Software to Install window
Load is a disruptive action, which requires a confirmation as shown in Figure 3-7.
Figure 3-7 Task confirmation dialog
It takes time to load the installer. To see what was happening on the server, we opened the
Operating System Messages panel. When the installer was ready, it printed a message
prompting us to open a Secure Shell (SSH) connection, as shown in Figure 3-8. Notice that
all installer panels use the ncurses interface:
Figure 3-8 Operating system messages
After opening an SSH session, a panel opens (see Figure 3-9 on page 35) from which you
can select the language:
Use the Tab key to move among fields
Use the Enter key and spacebar to press a button
You can switch between installer, shell, and the debug panels by using Ctrl-Right or
Ctrl-Left arrow keys at any time during the installation.
Figure 3-9 Welcome to KVM for IBM z
After accepting the International Program License Agreement, IBM and non-IBM Terms and
Conditions, and confirming that you want to install KVM for IBM z, the panel for selecting
disks for installation displays.
Figure 3-10 shows the panel that displays the available LUNs. These are the three LUNs we
defined in the .prm file in 3.2.1, “Preparing the .ins and .prm files” on page 32. The LUNs are
recognized as multipathed devices. From this panel, it is not clear which mpath device
represents which LUN. Such information is useful for manual partitioning.
Figure 3-10 Devices to install KVM for IBM z to
To determine which mpath device represents which LUN, we switched to the shell using the
Ctrl-Right Arrow key. The output of the multipath command (see Example 3-4 on page 36)
revealed three interesting pieces of information:
mpathe represents LUN 0, mpatha represents LUN 1, and mpathf represents LUN 2.
In addition to the two paths to each of our three LUNs specified in the parameter file, the
installer detected six more available paths to each LUN.
Aside from the three LUNs specified in the parameter file, the installer discovered another
seven LUNs available to our LPAR.
Example 3-4 multipath output
[root@itsokvm1 ~]# multipath -l
mpathe (360050768018305e120000000000000ea) dm-4 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:3:0 sdr 65:16 active undef running
| |- 1:0:0:0 sde 8:64 active undef running
| |- 1:0:3:0 sdaa 65:160 active undef running
| `- 0:0:2:0 sda 8:0 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:4:0 sdaf 65:240 active undef running
|- 0:0:5:0 sdap 66:144 active undef running
|- 1:0:4:0 sdbi 67:192 active undef running
`- 1:0:5:0 sdbs 68:96 active undef running
mpathd (360050768018305e120000000000000f0) dm-3 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:6 sdi 8:128 active undef running
| |- 1:0:0:6 sdq 65:0 active undef running
| |- 0:0:3:6 sdab 65:176 active undef running
| `- 1:0:3:6 sdbe 67:128 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:4:6 sdal 66:80 active undef running
|- 0:0:5:6 sdav 66:240 active undef running
|- 1:0:4:6 sdbo 68:32 active undef running
`- 1:0:5:6 sdby 68:192 active undef running
mpathc (360050768018305e120000000000000ed) dm-2 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:3 sdai 66:32 active undef running
| |- 0:0:5:3 sdas 66:192 active undef running
| |- 1:0:4:3 sdbl 67:240 active undef running
| `- 1:0:5:3 sdbv 68:144 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:3 sdd 8:48 active undef running
|- 1:0:0:3 sdm 8:192 active undef running
|- 0:0:3:3 sdx 65:112 active undef running
`- 1:0:3:3 sdbb 67:80 active undef running
mpathb (360050768018305e120000000000000ee) dm-1 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:4 sdf 8:80 active undef running
| |- 1:0:0:4 sdo 8:224 active undef running
| |- 0:0:3:4 sdy 65:128 active undef running
| `- 1:0:3:4 sdbc 67:96 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:4:4 sdaj 66:48 active undef running
|- 0:0:5:4 sdat 66:208 active undef running
|- 1:0:4:4 sdbm 68:0 active undef running
`- 1:0:5:4 sdbw 68:160 active undef running
mpatha (360050768018305e120000000000000eb) dm-0 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:1 sdag 66:0 active undef running
| |- 0:0:5:1 sdaq 66:160 active undef running
| |- 1:0:4:1 sdbj 67:208 active undef running
| `- 1:0:5:1 sdbt 68:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:1 sdb 8:16 active undef running
|- 1:0:0:1 sdg 8:96 active undef running
|- 0:0:3:1 sdt 65:48 active undef running
`- 1:0:3:1 sdaz 67:48 active undef running
mpathj (360050768018305e120000000000000f2) dm-9 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 1:0:0:8 sdu 65:64 active undef running
| |- 0:0:3:8 sdad 65:208 active undef running
| |- 0:0:2:8 sdl 8:176 active undef running
| `- 1:0:3:8 sdbg 67:160 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:4:8 sdan 66:112 active undef running
|- 0:0:5:8 sdax 67:16 active undef running
|- 1:0:4:8 sdbq 68:64 active undef running
`- 1:0:5:8 sdca 68:224 active undef running
mpathi (360050768018305e120000000000000f3) dm-8 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:9 sdao 66:128 active undef running
| |- 0:0:5:9 sday 67:32 active undef running
| |- 1:0:4:9 sdbr 68:80 active undef running
| `- 1:0:5:9 sdcb 68:240 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 1:0:0:9 sdw 65:96 active undef running
|- 0:0:2:9 sdn 8:208 active undef running
|- 0:0:3:9 sdae 65:224 active undef running
`- 1:0:3:9 sdbh 67:176 active undef running
mpathh (360050768018305e120000000000000f1) dm-7 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:7 sdam 66:96 active undef running
| |- 0:0:5:7 sdaw 67:0 active undef running
| |- 1:0:4:7 sdbp 68:48 active undef running
| `- 1:0:5:7 sdbz 68:208 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:7 sdj 8:144 active undef running
|- 0:0:3:7 sdac 65:192 active undef running
|- 1:0:0:7 sds 65:32 active undef running
`- 1:0:3:7 sdbf 67:144 active undef running
mpathg (360050768018305e120000000000000ef) dm-6 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:5 sdak 66:64 active undef running
| |- 0:0:5:5 sdau 66:224 active undef running
| |- 1:0:4:5 sdbn 68:16 active undef running
| `- 1:0:5:5 sdbx 68:176 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 1:0:0:5 sdp 8:240 active undef running
|- 0:0:2:5 sdh 8:112 active undef running
|- 0:0:3:5 sdz 65:144 active undef running
`- 1:0:3:5 sdbd 67:112 active undef running
mpathf (360050768018305e120000000000000ec) dm-5 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 1:0:0:2 sdk 8:160 active undef running
| |- 0:0:2:2 sdc 8:32 active undef running
| |- 0:0:3:2 sdv 65:80 active undef running
| `- 1:0:3:2 sdba 67:64 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:4:2 sdah 66:16 active undef running
|- 0:0:5:2 sdar 66:176 active undef running
|- 1:0:4:2 sdbk 67:224 active undef running
`- 1:0:5:2 sdbu 68:128 active undef running
53. Chapter 3. Installing and configuring the environment 39
Example 3-5 shows the output confirming that only three LUNs are configured for use, as
specified in the parameter file, although 10 LUNs were discovered.
Example 3-5 lszfcp output
[root@itsokvm1 ~]# lszfcp -D
0.0.b600/0x500507680120bc24/0x0000000000000000 0:0:0:0
0.0.b600/0x500507680120bc24/0x0001000000000000 0:0:0:1
0.0.b600/0x500507680120bc24/0x0002000000000000 0:0:0:2
0.0.b600/0x500507680130bc24/0x0000000000000000 0:0:1:0
0.0.b600/0x500507680130bc24/0x0001000000000000 0:0:1:1
0.0.b600/0x500507680130bc24/0x0002000000000000 0:0:1:2
0.0.b600/0x500507680120bb91/0x0000000000000000 0:0:2:0
0.0.b600/0x500507680120bb91/0x0001000000000000 0:0:2:1
0.0.b600/0x500507680120bb91/0x0002000000000000 0:0:2:2
0.0.b600/0x500507680130bb91/0x0000000000000000 0:0:3:0
0.0.b600/0x500507680130bb91/0x0001000000000000 0:0:3:1
0.0.b600/0x500507680130bb91/0x0002000000000000 0:0:3:2
0.0.b700/0x500507680120bc24/0x0000000000000000 1:0:0:0
0.0.b700/0x500507680120bc24/0x0001000000000000 1:0:0:1
0.0.b700/0x500507680120bc24/0x0002000000000000 1:0:0:2
0.0.b700/0x500507680130bc24/0x0000000000000000 1:0:1:0
0.0.b700/0x500507680130bc24/0x0001000000000000 1:0:1:1
0.0.b700/0x500507680130bc24/0x0002000000000000 1:0:1:2
0.0.b700/0x500507680120bb91/0x0000000000000000 1:0:2:0
0.0.b700/0x500507680120bb91/0x0001000000000000 1:0:2:1
0.0.b700/0x500507680120bb91/0x0002000000000000 1:0:2:2
0.0.b700/0x500507680130bb91/0x0000000000000000 1:0:3:0
0.0.b700/0x500507680130bb91/0x0001000000000000 1:0:3:1
0.0.b700/0x500507680130bb91/0x0002000000000000 1:0:3:2
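The LUN count can also be tallied directly from the lszfcp output. The following sketch embeds a few sample lines so the pipeline can be tried anywhere; the count_luns helper name is our own, and on a live system the here-document would be replaced by piping lszfcp -D into it:

```shell
# Count distinct LUNs from `lszfcp -D` output.  The second column is the
# SCSI H:C:T:L tuple; its last element is the LUN number.
count_luns() {
  awk '{ split($2, h, ":"); luns[h[4]] = 1 }
       END { n = 0; for (l in luns) n++; print n }'
}
# Sample lines copied from the output above; on a live system use:
#   lszfcp -D | count_luns
count_luns <<'EOF'
0.0.b600/0x500507680120bc24/0x0000000000000000 0:0:0:0
0.0.b600/0x500507680120bc24/0x0001000000000000 0:0:0:1
0.0.b600/0x500507680120bc24/0x0002000000000000 0:0:0:2
0.0.b600/0x500507680130bc24/0x0000000000000000 0:0:1:0
EOF
# prints 3
```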
Figure 3-11 shows that we selected all three configured LUNs on which KVM for IBM z will be
installed. In this panel, we can also define additional devices if needed.
Figure 3-11 Selected devices
Figure 3-12 shows the panel in which we can select automatic or manual partitioning. For our
installation, we chose automatic partitioning because we did not have any particular
requirements for the system layout.
Figure 3-12 Select partition method
Figure 3-13 shows the partition summary panel.
Figure 3-13 Partition summary panel
Next, we chose the time zone as depicted in Figure 3-14.
Figure 3-14 Time zone selection
In most installations, it is required to have a common time source among all components in
the IT environment. The IBM z Systems platform uses Server Time Protocol (STP) as its time
source provider, so we did not enable NTP servers, as shown in Figure 3-15.
Figure 3-15 NTP configuration
Figure 3-16 shows the panel for network configuration. A NIC named enccw0.0.2d00 was
already set online by the installer. This NIC was specified in the parameter file that is
described in 3.2.1, “Preparing the .ins and .prm files” on page 32. If no network was specified
in the parameter file, or if we needed to configure another card, this panel would have allowed
it. We decided to check whether the IP information for the NIC was set as specified in the
parameter file.
Figure 3-16 Configure network
Figure 3-17 shows the configuration of the enccw0.0.2d00 NIC. All of the parameters were
correctly read from the parameter file, and no changes were needed.
Figure 3-17 Network device configuration
We did not need to configure another NIC, so we went to the next panel, as shown in
Figure 3-18.
Figure 3-18 Configure network
Figure 3-19 shows the DNS configuration panel. The value in the Hostname field was read
from the parameter file. We did not provide any other DNS parameters because they were not
needed in our environment.
Figure 3-19 DNS configuration
Figure 3-20 shows the installation summary.
Figure 3-20 Installation summary
If there were existing partitions or volume groups, the panel shown in Figure 3-21 would
inform us that they were going to be removed.
Figure 3-21 Partitions and LVMs to be removed
After clicking OK, the installation begins. The progress bar shown in Figure 3-22 reports the
installation status.
Figure 3-22 Installation progress
After the installation process is finished, the panel shown in Figure 3-23 opens. After a reboot,
KVM for IBM z Systems is ready for use.
Figure 3-23 Reboot after installation
3.2.3 Configuring KVM for IBM z
This section describes several additional tasks we needed to perform in our environment after
KVM for IBM z was installed.
“Identifying our IPL device”
“Applying maintenance” on page 45
“Defining NICs” on page 46
“Defining Open vSwitches” on page 48
“Adding LUNs” on page 50
Identifying our IPL device
During the installation we used automatic partitioning, and we had no control over which LUN
was to be used as the initial program load (IPL) device. Example 3-6 shows that the /boot
mount point resides on device 360050768018305e120000000000000ec.
Example 3-6 Find /boot device
[root@itsokvm1 ~]# mount |grep boot
/dev/mapper/360050768018305e120000000000000ec1 on /boot type ext4
(rw,relatime,seclabel,data=ordered)
Example 3-7 shows the output from the multipath command. It shows that device
360050768018305e120000000000000ec maps to LUN 2.
Example 3-7 multipath output
[root@itsokvm1 ~]# multipath -l
360050768018305e120000000000000ec dm-0 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:0:2 sdd 8:48 active undef running
| |- 0:0:1:2 sda 8:0 active undef running
| |- 1:0:0:2 sdf 8:80 active undef running
| `- 1:0:1:2 sdh 8:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:2 sdb 8:16 active undef running
|- 0:0:3:2 sdc 8:32 active undef running
|- 1:0:2:2 sdi 8:128 active undef running
`- 1:0:3:2 sdj 8:144 active undef running
360050768018305e120000000000000eb dm-6 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:1 sdr 65:16 active undef running
| |- 0:0:3:1 sdt 65:48 active undef running
| |- 1:0:2:1 sdv 65:80 active undef running
| `- 1:0:3:1 sdx 65:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:0:1 sdw 65:96 active undef running
|- 0:0:1:1 sdq 65:0 active undef running
|- 1:0:0:1 sds 65:32 active undef running
`- 1:0:1:1 sdu 65:64 active undef running
360050768018305e120000000000000ea dm-1 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:0:0 sdn 8:208 active undef running
| |- 0:0:1:0 sde 8:64 active undef running
| |- 1:0:0:0 sdk 8:160 active undef running
| `- 1:0:1:0 sdm 8:192 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:0 sdg 8:96 active undef running
|- 0:0:3:0 sdl 8:176 active undef running
|- 1:0:2:0 sdp 8:240 active undef running
`- 1:0:3:0 sdo 8:224 active undef running
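The mapping from /boot to its LUN can also be scripted. The following sketch works on sample text copied from the mount and multipath output above, so it runs anywhere; on a live system the two variables would come from `mount | grep boot` and `multipath -l`, and it assumes the device-mapper name is the WWID plus a single-digit partition suffix:

```shell
# Sample lines taken from the output above.
mount_line='/dev/mapper/360050768018305e120000000000000ec1 on /boot type ext4'
path_line='| |- 0:0:0:2 sdd 8:48 active undef running'

# Strip /dev/mapper/ and the trailing partition digit to recover the WWID.
# (Assumes a single-digit partition number; a multi-digit suffix would
# need a smarter match.)
wwid=$(basename "${mount_line%% *}" | sed 's/[0-9]$//')
echo "WWID: $wwid"

# A path line ends with an H:C:T:L tuple; the last element is the LUN.
lun=$(echo "$path_line" | grep -o '[0-9]*:[0-9]*:[0-9]*:[0-9]*' | cut -d: -f4)
echo "LUN: $lun"
```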
Figure 3-24 shows how to IPL KVM for IBM z from the correct LUN when needed.
Figure 3-24 Load window
Applying maintenance
At the time of writing, Fix Pack 1 (FP1) was available from
http://www.ibm.com/support/fixcentral/
After downloading the code, we followed the steps provided in the README file that
accompanied FP1. Example 3-8 shows the commands that we executed, as instructed.
Example 3-8 Applying fixes
[root@itsokvm1 ~]# ll
total 152360
-rw-r--r--. 1 root root 156010496 Sep 22 11:11 KVMIBM-1.1.0.1-20150911-s390x.iso
-rw-r--r--. 1 root root 3260 Sep 22 11:11 README
[root@itsokvm1 ~]# mkdir -p /mnt/FIXPACK
[root@itsokvm1 ~]# mount -o ro,loop KVMIBM-1.1.0.1-20150911-s390x.iso /mnt/FIXPACK/
[root@itsokvm1 ~]# ls -l /mnt/FIXPACK/
total 41
dr-xr-xr-x. 2 1055 1055 2048 Sep 10 18:00 apar_db
-r-xr-xr-x. 1 1055 1055 33836 Sep 10 18:00 ibm_apar.sh
-r--r--r--. 1 1055 1055 3266 Sep 10 18:00 README
dr-xr-xr-x. 4 1055 1055 2048 Sep 10 18:00 Updates
[root@itsokvm1 ~]# cd /mnt/FIXPACK
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -y /mnt/FIXPACK/Updates/
Generating local repository to /mnt/FIXPACK/Updates/ ..
fixpack.repo :
[FIXPACK]
name=IBM FixPack ISO
baseurl=file:///mnt/FIXPACK/Updates/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-KVM-FOR-IBM
Copy fixpack.repo to /etc/yum.repos.d/ ? [y/N]y
/tmp//fixpack.repo -> /etc/yum.repos.d/fixpack.repo
Installation of REPO FIXPACK successful
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -a
Fetching packages from yum...
Creating APAR dependency list...
Analysing the available APAR against installed rpms
APAR | Status | Subject
-------------------------------------------------------------
ZZ00466 | NONE | FP1 fix collection (128088)
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -i latest
Found latest available APAR: ZZ00466
...
Do you want to continue with installation [y/N]y
Clean expirable cache files..
...
Total download size: 147 M
Is this ok [y/d/N]: y
Downloading packages:
...
Complete!
Processing done.
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -a
Fetching packages from yum...
Creating APAR dependency list...
Analysing the available APAR against installed rpms
APAR | Status | Subject
-------------------------------------------------------------
ZZ00466 | APPLIED | FP1 fix collection (128088)
[root@itsokvm1 FIXPACK]# reboot
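The APAR status table lends itself to scripted checks, for example to verify after the reboot that the fix pack still shows as applied. The following sketch parses a table in the format shown above; apar_status is a hypothetical helper name of our own, not part of ibm_apar.sh:

```shell
# Extract the Status column for a given APAR from an `ibm_apar.sh -a`
# style table ("APAR | Status | Subject") read on stdin.
apar_status() {  # $1 = APAR identifier
  awk -v id="$1" -F'|' '$1 ~ id { gsub(/ /, "", $2); print $2 }'
}
# Sample table copied from the output above; on a live system use:
#   ./ibm_apar.sh -a | apar_status ZZ00466
apar_status ZZ00466 <<'EOF'
APAR | Status | Subject
-------------------------------------------------------------
ZZ00466 | APPLIED | FP1 fix collection (128088)
EOF
# prints APPLIED
```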
Defining NICs
As described in 3.1, “Our configuration” on page 28, our environment needed more than one
NIC to support two different LANs for virtual servers, each LAN connected through a bonding
interface. Our image contains only one NIC, as shown in Example 3-9; this NIC provides
administrative access to KVM for IBM z.
Example 3-9 Checking configured NICs
[root@itsokvm1 ~]# znetconf -c
Device IDs Type Card Type CHPID Drv. Name
State
--------------------------------------------------------------------------------
0.0.2d00,0.0.2d01,0.0.2d02 1731/01 OSD_1000 04 qeth enccw0.0.2d00
online
Example 3-10 shows a list of unconfigured NICs available to our environment.
Example 3-10 Checking available NICs
[root@itsokvm1 ~]# znetconf -u
Scanning for network devices...
Device IDs Type Card Type CHPID Drv.
------------------------------------------------------------
0.0.2d03,0.0.2d04,0.0.2d05 1731/01 OSA (QDIO) 04 qeth
0.0.2d06,0.0.2d07,0.0.2d08 1731/01 OSA (QDIO) 04 qeth
0.0.2d09,0.0.2d0a,0.0.2d0b 1731/01 OSA (QDIO) 04 qeth
0.0.2d0c,0.0.2d0d,0.0.2d0e 1731/01 OSA (QDIO) 04 qeth
0.0.2d20,0.0.2d21,0.0.2d22 1731/01 OSA (QDIO) 05 qeth
0.0.2d23,0.0.2d24,0.0.2d25 1731/01 OSA (QDIO) 05 qeth
0.0.2d26,0.0.2d27,0.0.2d28 1731/01 OSA (QDIO) 05 qeth
0.0.2d29,0.0.2d2a,0.0.2d2b 1731/01 OSA (QDIO) 05 qeth
0.0.2d2c,0.0.2d2d,0.0.2d2e 1731/01 OSA (QDIO) 05 qeth
0.0.2d40,0.0.2d41,0.0.2d42 1731/01 OSA (QDIO) 06 qeth
0.0.2d43,0.0.2d44,0.0.2d45 1731/01 OSA (QDIO) 06 qeth
0.0.2d46,0.0.2d47,0.0.2d48 1731/01 OSA (QDIO) 06 qeth
0.0.2d49,0.0.2d4a,0.0.2d4b 1731/01 OSA (QDIO) 06 qeth
0.0.2d4c,0.0.2d4d,0.0.2d4e 1731/01 OSA (QDIO) 06 qeth
0.0.2d60,0.0.2d61,0.0.2d62 1731/01 OSA (QDIO) 07 qeth
0.0.2d63,0.0.2d64,0.0.2d65 1731/01 OSA (QDIO) 07 qeth
0.0.2d66,0.0.2d67,0.0.2d68 1731/01 OSA (QDIO) 07 qeth
0.0.2d69,0.0.2d6a,0.0.2d6b 1731/01 OSA (QDIO) 07 qeth
As shown in Figure 3-2 on page 29, we chose to use devices 2d03, 2d23, 2d43, and 2d63 to
connect our Open vSwitch bridges to the LAN. These devices must be configured as Layer 2
devices and must be able to provide bridging functions.
We configured them with the required parameters and confirmed that the needed devices
were online, as shown in Example 3-11.
Example 3-11 Configuring NICs online
[root@itsokvm1 ~]# znetconf -a 2d03 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d03 (enccw0.0.2d03)
[root@itsokvm1 ~]# znetconf -a 2d23 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d23 (enccw0.0.2d23)
[root@itsokvm1 ~]# znetconf -a 2d43 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d43 (enccw0.0.2d43)
[root@itsokvm1 ~]# znetconf -a 2d63 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d63 (enccw0.0.2d63)
[root@itsokvm1 ~]# znetconf -c
Device IDs Type Card Type CHPID Drv. Name
State
--------------------------------------------------------------------------------
0.0.2d00,0.0.2d01,0.0.2d02 1731/01 OSD_1000 04 qeth enccw0.0.2d00
online
0.0.2d03,0.0.2d04,0.0.2d05 1731/01 OSD_1000 04 qeth enccw0.0.2d03
online
0.0.2d23,0.0.2d24,0.0.2d25 1731/01 OSD_1000 05 qeth enccw0.0.2d23
online
0.0.2d43,0.0.2d44,0.0.2d45 1731/01 OSD_1000 06 qeth enccw0.0.2d43
online
0.0.2d63,0.0.2d64,0.0.2d65 1731/01 OSD_1000 07 qeth enccw0.0.2d63
online
Example 3-12 shows a test of the bridging capabilities of the newly configured NICs.
Example 3-12 Check bridging capabilities
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d03/device/bridge_state
active
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d23/device/bridge_state
active
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d43/device/bridge_state
active
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d63/device/bridge_state
active
We brought the NICs online dynamically, so these changes will not persist across a system
restart. To make them persistent, there must be a corresponding ifcfg-enccw0.0.2dx3 file in
the /etc/sysconfig/network-scripts directory.
An example of such a file is shown in Example 3-13. A corresponding file must be created for
each NIC, four files in our case.
Example 3-13 Make changes permanent
[root@itsokvm1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-enccw0.0.2d03
TYPE=Ethernet
BOOTPROTO=none
NAME=enccw0.0.2d03
DEVICE=enccw0.0.2d03
ONBOOT=yes
NETTYPE=qeth
SUBCHANNELS="0.0.2d03,0.0.2d04,0.0.2d05"
OPTIONS="layer2=1 bridge_reflect_promisc=primary buffer_count=128"
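Creating the four files by hand is error prone, so a short loop can generate them from the template above. The following sketch writes to a scratch directory rather than /etc/sysconfig/network-scripts, so it can be tried safely; it assumes each NIC uses three consecutive subchannel numbers, as ours do:

```shell
# Generate one ifcfg file per NIC so the Layer 2 settings survive a
# reboot.  On the host, copy the files to /etc/sysconfig/network-scripts.
outdir=$(mktemp -d)
for base in 2d03 2d23 2d43 2d63; do
  # Each OSA NIC uses three consecutive subchannels: base, base+1, base+2.
  s1=$(printf '%04x' $((0x$base + 1)))
  s2=$(printf '%04x' $((0x$base + 2)))
  cat > "$outdir/ifcfg-enccw0.0.$base" <<EOF
TYPE=Ethernet
BOOTPROTO=none
NAME=enccw0.0.$base
DEVICE=enccw0.0.$base
ONBOOT=yes
NETTYPE=qeth
SUBCHANNELS="0.0.$base,0.0.$s1,0.0.$s2"
OPTIONS="layer2=1 bridge_reflect_promisc=primary buffer_count=128"
EOF
done
ls "$outdir"
```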
Defining Open vSwitches
As described in 3.1, “Our configuration” on page 28, we needed to create two Open
vSwitches (shown as OVS in our examples). For KVM for IBM z to handle OVS, the
openvswitch service must be running; it is not enabled by default. Example 3-14 shows the
commands to check whether the service is running, enable it to start after a system restart,
start it dynamically, and check its status after it is started.
Example 3-14 openvswitch service
[root@itsokvm1 ~]# ovs-vsctl show
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such
file or directory)
[root@itsokvm1 ~]# systemctl status openvswitch
openvswitch.service - Open vSwitch
Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled)
Active: inactive (dead)