Redbooks
Front cover
Getting Started with KVM for IBM z Systems
Bill White
Tae Min Baek
Mark Ecker
Marian Gasparovic
Manoj S Pattabhiraman
International Technical Support Organization
Getting Started with KVM for IBM z Systems
November 2015
SG24-8332-00
© Copyright International Business Machines Corporation 2015. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
First Edition (November 2015)
This edition applies to Version 1, Release 1, Modification 0 of KVM for IBM z Systems (product number
5648-KVM).
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
IBM Redbooks promotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Authors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Now you can become a published author, too . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Chapter 1. KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Why KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Advantages of using KVM for IBM z Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 IBM z Systems and KVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Storage connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Network connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.3 Hardware Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.4 Open source virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.5 What comes with KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Managing the KVM for IBM z Systems environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.1 IBM z Systems Hypervisor Performance Manager (zHPM) . . . . . . . . . . . . . . . . . . 9
1.4 Using IBM Cloud Manager with OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Chapter 2. Planning the environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1 Planning KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.1 Hardware requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.2 Software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.3 Installation methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Planning virtualized resources for KVM virtual machines . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.1 Compute consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.2 Storage consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.3 Network consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.4 Software consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.5 Live migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 Planning KVM virtual machine management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4 Planning a cloud infrastructure with KVM and
IBM Cloud Manager with OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.1 Planning for KVM for IBM z Systems installation . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.2 Planning for virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4.3 Planning for IBM Cloud Manager with OpenStack installation . . . . . . . . . . . . . . . 22
2.4.4 Planning for IBM Cloud Manager with OpenStack deployment . . . . . . . . . . . . . . 24
Chapter 3. Installing and configuring the environment. . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1 Our configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1.1 Logical view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1.2 Physical resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1.3 Preparation tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2 Setting up KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.1 Preparing the .ins and .prm files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.2.2 Installing KVM for IBM z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2.3 Configuring KVM for IBM z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.3 Deploying virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.3.1 Preparing the environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.3.2 Installing Linux on z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.3.3 Modifying domain definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.3.4 Linux on z Systems configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Chapter 4. Managing and monitoring the environment. . . . . . . . . . . . . . . . . . . . . . . . . 65
4.1 KVM on IBM z System management interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.1.1 Introduction to the libvirt management stack. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.2 Using virsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2.1 Basic commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.2.2 Add I/O resources dynamically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.2.3 VM live migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.3 Monitoring KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3.1 Configuring the Nagios monitoring tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Chapter 5. Building a cloud environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.1 Overview of IBM Cloud Manager with OpenStack V4.3 . . . . . . . . . . . . . . . . . . . . . . . . 78
5.1.1 IBM Cloud Manager with OpenStack version 4.3 . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.1.2 Environmental setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.2 Installing, deploying, and configuring KVM on a cloud based on IBM z Systems. . . . . 81
5.2.1 Installing and update IBM Cloud Manager with OpenStack V4.3 . . . . . . . . . . . . . 81
5.2.2 Deploying the IBM Cloud Manager topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.2.3 Creating a cloud environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.2.4 Environment templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.2.5 Creating a controller topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.2.6 Creating a compute node topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.2.7 Cloud environment verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.2.8 Accessing IBM Cloud Manager 4.3 with OpenStack. . . . . . . . . . . . . . . . . . . . . . . 91
Appendix A. Installing KVM for IBM z Systems with ECKD devices . . . . . . . . . . . . . . 95
Parameter file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Appendix B. Installing IBM Cloud Manager with OpenStack . . . . . . . . . . . . . . . . . . . . 97
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Yum repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Host name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Security-Enhanced Linux (SELinux) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Network Time Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Installing IBM Cloud Manager 4.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Applying IBM Cloud Manager with OpenStack 4.3 fix packs . . . . . . . . . . . . . . . . . . . . 101
Appendix C. Basic setup and use of zHPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
DB2®
DS8000®
ECKD™
FICON®
FlashSystem™
Global Business Services®
IBM®
IBM FlashSystem®
IBM z™
IBM z Systems™
IBM z13™
PR/SM™
Processor Resource/Systems Manager™
Redbooks®
Redbooks (logo) ®
Storwize®
System z®
XIV®
z Systems™
z/OS®
z/VM®
z13™
The following terms are trademarks of other companies:
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication gives a broad explanation of the kernel-based virtual
machine (KVM) for IBM z™ Systems and how it uses the architecture of IBM z Systems™. It
focuses on the planning and design of the environment and provides installation and
configuration definitions that are necessary to build and manage KVM for IBM z Systems. It
also helps you plan, install, and configure IBM Cloud Manager with OpenStack for use with
KVM for IBM z Systems in a cloud environment.
This book is useful to IT architects and system administrators who plan for and install KVM for
IBM z Systems. The reader is expected to have a good understanding of IBM z Systems
hardware, KVM, Linux on z Systems, and cloud concepts.
Authors
This book was produced by a team of specialists from around the world working at the
IBM International Technical Support Organization, Poughkeepsie Center.
Bill White is a Project Leader and Senior z Systems Networking and Connectivity Specialist
at IBM Redbooks, Poughkeepsie Center.
Tae Min Baek is a Certified IT Architect for IBM Systems hardware in Korea. He has 16
years of experience in z Systems virtualization, IBM z/OS®, IBM z/VM®, and Linux operating
systems. Currently, he works in Technical Sales for Linux on z Systems and as a benchmark
center leader in Korea. He also provides technical support for Linux on z Systems cloud
solutions, porting local ISV solutions, PoC/benchmark tests, and implementation projects.
Mark Ecker is a certified z Systems Client Technical Specialist in the United States. He has
worked for IBM for 17 years in the z Systems field. His areas of expertise include capacity
planning, solution design, and deep knowledge of the z Systems platform. Mark is also a
co-author of IBM Enterprise Workload Manager V2.1, SG24-6785.
Marian Gasparovic is an IT Specialist working for the IBM Systems Group in IBM Slovakia.
After working as a z/OS administrator with an IBM Business Partner, he joined IBM as a
storage specialist. Later, he worked as a Field Technical Sales Specialist and was responsible
for new workloads. He joined Systems Lab Services and Training in 2010. His main area of
expertise is virtualization on z Systems. He is a co-author of several IBM Redbooks
publications.
Manoj S Pattabhiraman is an IBM Certified Senior IT Specialist from the IBM Benchmarking
Center, Singapore. He has more than 14 years of experience in IBM System z® virtualization,
cloud, and Linux on System z. In his current role, he leads the System z benchmarking team
in Singapore and also provides consultation and implementation services for various Linux on
System z customers across the ASEAN region. Manoj has contributed to several z/VM and Linux
on System z related IBM Redbooks publications, and has been a frequent presenter at
various technical conferences and workshops on z/VM and Linux on System z.
Thanks to the following people for their contributions to this project:
Ella Buslovich and Karen Lawrence
IBM Redbooks
Dave Bennin, Don Brennan, Rich Conway, and Bob Haimowitz
IBM Global Business Services®, Development Support Team
Zhuo Hua Li and Hong Jin Wei
IBM China
Klaus Smolin, Tony Gargya, and Viktor Mihajlovski
IBM Germany
Now you can become a published author, too
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time. Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us.
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form:
ibm.com/redbooks
Send your comments by email:
redbooks@us.ibm.com
Mail your comments:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1. KVM for IBM z Systems
This chapter is an introduction to open virtualization with KVM for IBM z Systems and a
description of how the environment can be managed. It covers the following topics:
Why KVM for IBM z Systems
IBM z Systems and KVM
Managing the KVM for IBM z Systems environment
Using IBM Cloud Manager with OpenStack
Terminology: The terms virtual server and virtual machine are interchangeable. Both
terms are used throughout this book, depending on the component being discussed.
1.1 Why KVM for IBM z Systems
Today’s systems must be able to scale up and scale out, not only in terms of performance and
size, but also in functions. Virtualization is a core enabler of system capability, but open
source and standards are key to making virtualization effective.
KVM for IBM z Systems is an open source virtualization option for running Linux-centric
workloads, using common Linux-based tools and interfaces, while taking advantage of the
robust scalability, reliability, and security that is inherent to the IBM z Systems platform. The
strengths of the z Systems platform have been developed and refined over several decades
to provide additional value to any type of IT-based services.
KVM for IBM z Systems can manage and administer multiple virtual machines, allowing for
large numbers of Linux-based workloads to run simultaneously on the z Systems platform.
z Systems platforms also have a long history of providing security for applications and
sensitive data in virtual environments. It is the most securable platform in the industry, with
security integrated throughout the stack in hardware, firmware, and software.
1.1.1 Advantages of using KVM for IBM z Systems
KVM for IBM z Systems offers enterprises a cost-effective alternative to other hypervisors. It
has simple and familiar standard user interfaces, offering easy integration of the z Systems
platform into any IT infrastructure.
KVM for IBM z Systems can be managed to allow for over-commitment of system resources
to optimize the virtualized environment. This is described in 2.2.1, “Compute consideration”
on page 14.
In addition, KVM for IBM z Systems can help make platform mobility easier. Its live relocation
capabilities enable you to move virtual machines and workloads between multiple instances
of KVM for IBM z Systems without incurring downtime.
Table 1-1 lists some of the key features and benefits of KVM for IBM z Systems.
Note: Both KVM for IBM z Systems and Linux on z Systems are the same KVM and Linux
that run on other hardware platforms, with the same look and feel.
Table 1-1 KVM for IBM z Systems key features

KVM hypervisor: supports running multiple disparate Linux virtual machines on a single system.
CPU sharing: allows for the sharing of CPU resources by virtual machines.
I/O sharing: enables the sharing of I/O resources among virtual machines.
Memory and CPU over-commitment: supports the over-commitment of CPU and memory, and the swapping of inactive memory.
Live virtual machine relocation: enables workload migration with minimal impact.
Dynamic addition and deletion of virtual I/O devices: reduces downtime to modify I/O device configurations for virtual machines.
Thin-provisioned virtual machines: allows for copy-on-write virtual disks to save on storage.
Hypervisor performance management: supports policy-based, goal-oriented management and monitoring of virtual CPU resources.
Installation and configuration tools: supplies tools to install and configure KVM for IBM z Systems.
Transactional execution: provides improved performance for running multi-threaded applications.
1.2 IBM z Systems and KVM
The z Systems platform is highly virtualized, with the goal of maximizing the use of compute
and I/O (storage and network) resources, and simultaneously lowering the total amount of
resources needed for your workloads. For decades, virtualization has been embedded in
z Systems architecture and built into the hardware and firmware.
Virtualization requires a hypervisor, which manages resources that are required for multiple
independent virtual machines. Hypervisors can be implemented in software or hardware, and
z Systems has both. The hardware hypervisor is known as IBM Processor Resource/Systems
Manager™ (PR/SM™). PR/SM is implemented in firmware as part of the base system. It fully
virtualizes the system resources and does not require additional software to run. KVM for
IBM z is a software hypervisor that uses PR/SM functions to service its virtual machines.
PR/SM enables defining and managing subsets of the z Systems resources in logical
partitions (LPARs). Each KVM for IBM z instance runs in a dedicated LPAR. The LPAR
definition includes several logical processing units (LPUs), memory, and I/O resources. LPUs
are defined and managed by PR/SM and are perceived by KVM for IBM z as real CPUs.
PR/SM is responsible for accepting requests for work on LPUs and dispatching that work on
physical CPUs. LPUs can be dynamically added to and removed from an LPAR. LPARs can
be added, modified, activated, or deactivated in z Systems platforms using the Hardware
Management Console (HMC).
KVM for IBM z Systems also uses PR/SM to access storage devices and the network for
Linux on z Systems virtual machines (see Figure 1-1).
Figure 1-1 KVM running in z Systems LPARs
1.2.1 Storage connectivity
Storage connectivity is provided on the z Systems platforms by host bus adapters (HBAs)
called Fibre Connection (IBM FICON®) features. IBM FICON (FICON Express16S and
FICON Express8S) features follow Fibre Channel (FC) standards. They support data storage
and access requirements and the latest FC technology in storage devices.
The FICON features support the following protocols:
Native FICON
An enhanced protocol (over FC) that provides for communication with FICON devices,
such as disks, tapes, and printers. Native FICON supports IBM Extended Count Key Data
(ECKD™) devices.
Fibre Channel Protocol (FCP)
A standard protocol for communicating with disk and tape devices. FCP supports small
computer system interface (SCSI) devices.
Linux on z Systems and KVM for IBM z Systems can use both protocols by using the FICON
features.
1.2.2 Network connectivity
Network connectivity is provided on the z Systems platform by the network interface cards
(NICs) called Open Systems Adapter (OSA) features. The OSA features (OSA-Express5S,
OSA-Express4S, and OSA-Express3) provide direct, industry-standard local area network
(LAN) connectivity and communication in a networking infrastructure.
OSA features use the z Systems I/O architecture, called queued direct input/output (QDIO).
QDIO is a highly efficient data transfer mechanism that uses system memory queues and a
signaling protocol to directly exchange data between the OSA microprocessor in the feature
and the network stack running in the operating system.
KVM for IBM z Systems virtualizes the OSA features for Linux on z Systems to use.
For more information about storage and network connectivity for Linux on z Systems, see
The Virtualization Cookbook for IBM z Systems Volume 3: SUSE Linux Enterprise Server
12, SG24-8890:
http://www.redbooks.ibm.com/abstracts/sg248890.html
1.2.3 Hardware Management Console
The Hardware Management Console (HMC) is a stand-alone computer that runs a set of
management applications. The HMC is a closed system, which means that no other
applications can be installed on it.
The HMC can set up, manage, monitor, and operate one or more z Systems platforms. It
manages and provides support utilities for the hardware and its LPARs.
The HMC is used to install KVM for IBM z Systems and to provide an interface to the IBM z
Systems hardware for configuration management functions.
For details about the HMC, see Introduction to the Hardware Management Console in the
IBM Knowledge Center:
http://ibm.co/1PD5gFi
1.2.4 Open source virtualization
Kernel-based virtual machine (KVM) technology is a cross-platform virtualization technology
that turns the Linux kernel into an enterprise-class hypervisor by using the hardware
virtualization support built into the z Systems platform. This means that KVM for IBM z
Systems can do things such as scheduling tasks, dispatching CPUs, managing memory, and
interacting with I/O resources (storage and network) through PR/SM.
KVM for IBM z Systems creates virtual machines as Linux processes that run Linux on
z Systems images using a modified version of another open source module, known as a quick
emulator (QEMU). QEMU provides I/O device emulation and device virtualization inside the
virtual machine.
The KVM for IBM z Systems kernel provides the core virtualized infrastructure. It can
schedule virtual machines on real CPUs and manage their access to real memory. QEMU
runs in a user space and implements virtual machines using KVM module functions.
QEMU virtualizes real storage and network resources for a virtual machine, which, in turn,
uses virtio drivers to access these virtualized resources, as shown in Figure 1-2.
Figure 1-2 Open source virtualization: KVM for IBM z Systems
The network interface in Linux on z Systems is a virtual Ethernet interface. The interface
name is eth. Multiple Ethernet interfaces can be defined to Linux and are handled by the
virtio_net device driver module.
In Linux, a generic virtual block device is used rather than specific devices, such as ECKD or
SCSI devices. The virtual block devices are handled by the virtio_blk device driver module.
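As a quick sanity check (a sketch only; the exact output varies by distribution and configuration), the loaded virtio driver modules can be listed from inside a running guest:

# From inside a Linux on z Systems guest, list the loaded virtio modules
lsmod | grep virtio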
For information about KVM, see KVM — an open cross-platform virtualization alternative, a
smarter choice:
http://www.ibm.com/systems/virtualization/kvm/
Browse KVM for IBM z Systems product publications in the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_kvm.html
1.2.5 What comes with KVM for IBM z Systems
KVM for IBM z Systems provides standard Linux and KVM interfaces for operational control of
the environment, such as standard drivers and application programming interfaces (APIs), as
well as system emulation support and virtualization management. Included as part of KVM for
IBM z Systems are the following components:
The command-line interface (CLI) is a common, familiar Linux interface environment used
to issue commands and interact with the KVM hypervisor. The user issues successive
commands to change or control the environment.
Libvirt is open source software that resides on KVM and many other hypervisors to
provide low-level virtualization capabilities that interface with KVM through a CLI called
virsh. A list of key virsh commands is included in “Using virsh” on page 67.
The IBM z Systems Hypervisor Performance Manager (zHPM) monitors virtual machines
running on KVM to achieve goal-oriented, policy-based performance management (see
Appendix C, “Basic setup and use of zHPM” on page 103).
Open vSwitch (OVS) is open source software that allows for network communication
between the virtual machines hosted by the KVM hypervisor and external networks. See
this website for more information:
http://www.openvswitch.org
MacVTap is a device driver used to virtualize bridge networking and is based on the
macvlan device driver. See this website for more information:
http://virt.kernelnewbies.org/MacVTap
QEMU is open source software that is a hardware emulator for virtual machines running
on KVM. It also provides management and monitoring functions for the KVM virtual
machines. For more information, see the QEMU.org wiki:
http://wiki.qemu.org
The installer offers a series of panels to assist and guide the user through the installation
process. Each panel has setting selections that can be made to customize the KVM
installation. See Chapter 3, “Installing and configuring the environment” on page 27 for
examples of the installer panels.
Nagios remote plug-in executor (NRPE) can be used with KVM for IBM z. NRPE is an
add-on that allows you to execute plug-ins on KVM for IBM z. With those plug-ins, you can
monitor resources, such as disk usage, CPU load, and memory usage. For more
information, see “Configuring the Nagios monitoring tool” on page 70.
1.3 Managing the KVM for IBM z Systems environment
KVM for IBM z Systems integrates with standard OpenStack virtualization management,
which enables enterprises to easily integrate Linux servers into their infrastructure and cloud
offerings.
KVM for IBM z Systems supports libvirt APIs to enable CLIs (and custom scripting) to be used
to administer the hypervisor. KVM can be administered using open source tools, such as
virt-manager or OpenStack. KVM for IBM z Systems can also be administered and managed
by using IBM Cloud Manager with OpenStack (see Figure 1-3 on page 8). IBM Cloud
Manager is created and maintained by IBM and built on OpenStack.
Figure 1-3 KVM for IBM z Systems management interfaces
KVM for IBM z Systems can be managed just like any other KVM hypervisor by using the
Linux CLI. The Linux CLI provides a familiar experience for platform management.
In addition, an open source tool called Nagios can be used to monitor the KVM for IBM z
Systems environment.
Libvirt provides different methods of access through a layered approach, from a command-line
tool called virsh in the libvirt tools layer to a low-level API for many programming languages
(see Figure 1-4).
Figure 1-4 KVM management via libvirt API layers
(The figure shows the layered stack: the application layer on top of the libvirt tools layer, the libvirt API layer with the libvirtd daemon, the hypervisor layer, and the hardware.)
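As a minimal illustration of this layering, the virsh tool (in the libvirt tools layer) can be pointed at the local libvirtd daemon through a connection URI; qemu:///system is the standard URI for the local system instance:

# Connect virsh to the local system libvirtd instance and report
# the library and hypervisor versions in use
virsh -c qemu:///system version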
The main component of the libvirt software is the libvirtd daemon. This is the component that
interacts directly with QEMU and the KVM kernel at the hypervisor layer. QEMU manages
and monitors the KVM virtual machines by performing the following tasks:
Manage the I/O between virtual machines and KVM
Create virtual disks
Change the state of a virtual machine:
– Start a virtual machine
– Stop a virtual machine
– Suspend a virtual machine
– Resume a virtual machine
– Delete a virtual machine
– Take and restore snapshots
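For orientation, these state changes map directly onto virsh subcommands. The following is a sketch only; vm01 is a hypothetical domain name:

# List all defined virtual machines and their current states
virsh list --all

# Start, suspend, resume, and shut down a virtual machine
virsh start vm01
virsh suspend vm01
virsh resume vm01
virsh shutdown vm01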
See the libvirt website for more information about libvirt:
http://libvirt.org
1.3.1 IBM z Systems Hypervisor Performance Manager (zHPM)
zHPM monitors and manages workload performance of the virtual machines under KVM by
performing the following operations:
Detect when a virtual machine is not achieving its goals when it is a member of a
Workload Resource Group.
Determine whether the virtual machine performance can be improved with additional
resources.
Project the impact of reallocating resources on all virtual machines.
Redistribute processor resources if there is a good trade-off based on policy.
For more information, see Introduction to zHPM in the IBM Knowledge Center:
http://ibm.co/1japece
zHPM setup instructions and examples are in Appendix C, “Basic setup and use of zHPM” on
page 103.
1.4 Using IBM Cloud Manager with OpenStack
OpenStack is a cloud operating system that controls large pools of compute, storage, and
networking resources throughout a data center. It is based on the OpenStack project:
http://www.openstack.org/
IBM Cloud Manager with OpenStack is an advanced management solution that is created
and maintained by IBM and built on OpenStack. It can be used to get started with a cloud
environment and continue to scale with users and workloads, providing advanced resource
management with simplified cloud administration and full access to OpenStack APIs.
KVM for IBM z Systems compute nodes support the following OpenStack services:
Nova libvirt driver
Neutron agent for Open vSwitch
Ceilometer support
Cinder
The OpenStack compute node has an abstraction layer for compute drivers to support
different hypervisors, including QEMU and KVM for IBM z Systems through the libvirt API
layer (see Figure 1-4 on page 8).
Chapter 2. Planning the environment
This chapter describes the planning activities to carry out before installing kernel-based
virtual machine (KVM) for IBM z Systems and before setting up virtual environments
managed by KVM. It also covers the available management tools and provides an overview of
a scenario that is implemented in this book as an example, along with the required checklists
for the scenario. The information in this chapter will assist you with all of these tasks.
This chapter includes the following sections:
Planning KVM for IBM z Systems
Planning virtualized resources for KVM virtual machines
Planning KVM virtual machine management
Planning a cloud infrastructure with KVM and IBM Cloud Manager with OpenStack
2.1 Planning KVM for IBM z Systems
The supported hardware and software need to be configured as described in this chapter
before installation of KVM for IBM z Systems. An installation method also needs to be
determined, as described in this section.
2.1.1 Hardware requirements
The supported servers, storage hardware, and network features described in the subsections
that follow need to be confirmed before the installation begins.
Servers
The following servers are supported. On these servers, KVM for IBM z Systems runs only on
activated Integrated Facility for Linux (IFL) processors:
IBM z13™
IBM zEC12
IBM zBC12
Storage
KVM for IBM z Systems supports small computer system interface (SCSI) devices and
extended count key data (IBM ECKD) devices. You can use either SCSI or ECKD devices or
both. The following storage devices are supported:
SCSI devices:
– IBM XIV®
– IBM Storwize® V7000
– IBM FlashSystem™
– SAN Volume Controller
– IBM DS8000® (FCP attached)
ECKD devices:
– DS8000 (IBM FICON attached)
The Fibre Channel Protocol (FCP) channel supports multiple switches and directors placed
between the IBM z Systems server and the SCSI device. This can provide more choices for
storage solutions or the ability to use existing storage devices. ECKD devices can help to
manage disks efficiently because KVM and Linux do not have to manage the I/O path or
load balancing; these are already managed by the IBM z Systems hardware. You can
choose SCSI devices, ECKD devices, or both for the KVM environment.
Host bus adapters
The following FICON features support connectivity to both SCSI and ECKD devices:
FICON Express16S
FICON Express8S
Network interface cards
The following Open Systems Adapter (OSA) features are supported:
IBM OSA-Express5S
IBM OSA-Express4S
IBM OSA-Express3 (zEC12 and zBC12 only)
With this OSA feature, KVM for IBM z Systems does not support VLANs or flat networks
together with Open vSwitch.¹
Logical partitions (LPARs) for KVM
When you define and allocate resources to LPARs on which KVM is installed, consider CPU
and memory needs:
CPU
A minimum of 1 CPU (known as Integrated Facility for Linux, or IFL) must be assigned to
the KVM LPAR. The suggestion is to assign no more than 36 IFLs per KVM LPAR.
Memory
A maximum of 8 TB of RAM can be allocated per KVM LPAR. The suggestion is to
allocate no more than 1 TB of RAM per KVM LPAR.
For the IBM z Systems platform, your system must be at the proper firmware or microcode
level. At the time of writing, these were the appropriate levels:
For z13: N98805.010 D22H Bundle 20a
For zEC12 and zBC12: H49525.013 D15F Bundle 45a
For more information, search the Preventive Service Planning buckets web page:
http://www.software.ibm.com/webapp/set2/psp/srchBroker
Search for the following PSP hardware upgrade identifiers:
For the IBM z13, the PSP bucket is 2964DEVICE.
For the IBM zEC12, the PSP bucket is 2827DEVICE.
For the IBM zBC12, the PSP bucket is 2828DEVICE.
2.1.2 Software requirements
The following software resources are required:
KVM for IBM z Systems V1.1.0 (Product Number 5648-KVM)
KVM for IBM z Systems can be ordered and delivered electronically using the IBM Shopz:
http://www.ibm.com/software/ShopzSeries
After you download the ISO file from IBM Shopz, you can use it to install from an FTP
server or burn a DVD and use that for the installation.
The latest available Fix Pack for KVM for IBM z Systems
KVM for IBM z Systems 1.1.0.1 contains the current, cumulative fix packs. Download
these from IBM Fix Central:
http://www.ibm.com/support/fixcentral/
¹ Open vSwitch is a multilayer virtual switch. For details, see this website: http://openvswitch.org/.
2.1.3 Installation methods
You can install KVM for IBM z Systems using either of the following methods:
From an FTP server, where the FTP server is in the same subnet as the Hardware
Management Console (HMC).
From a DVD (or a CD with a capacity of 800 MB or greater) that you create, containing the
install images. An FTP server is also required, but this method does not require the FTP
server to be in the same subnet as the IBM HMC. You will need to copy and create the
.ins and .prm files that correspond with your environment and burn them with the ISO
image to the physical DVD or CD.
More details about performing the installation from a DVD are available in KVM for IBM z
Systems: Planning and Installation Guide, SC27-8236-00 in the IBM Knowledge Center:
http://ibm.co/1Qxm1BW
The FTP server must be accessible from the target installation LPAR.
We chose the FTP server method of installation because it has more flexibility for creating
and updating the generic .prm file that is needed during installation. Before the installation,
we prepared the FTP server in our scenario to be in the same subnet as the HMC. Details of
the installation method from an FTP server are provided in Chapter 3, “Installing and
configuring the environment” on page 27.

Note: You must prepare your own FTP server and upload the ISO file for KVM for IBM z
Systems to the FTP server before installation. The installation method you select depends
on the subnet of the FTP server.
2.2 Planning virtualized resources for KVM virtual machines
After installing KVM for IBM z Systems, you can plan and design the virtualized environments
to build (including CPU, memory, storage, and network) and run the virtual machines on KVM.
When adding virtual machines, you must create .xml files to define your virtual resources.
The following sections describe considerations for virtual resources when you define virtual
machines.
2.2.1 Compute consideration
Virtual CPUs and memory are configured for a virtual machine by using the vcpu and
memory elements in the .xml file that defines it.
KVM supports CPU and memory over-commitment. To maximize performance, it is
suggested that you define the minimum number of virtual CPUs and memory necessary for
each virtual machine. If you allocate more virtual CPUs to the virtual machines than are
needed, the system works, but this configuration can cause performance degradation as the
number of virtual machines increases. Consider these suggestions:
CPU:
– The suggested over-commit ratio of CPUs is 10:1 (virtual-to-real). The real CPUs in
this case are the IFLs assigned to the KVM LPAR.
– Do not define more virtual CPUs to a virtual machine than the number of IFLs assigned
to the KVM LPAR. The maximum number of virtual CPUs per virtual machine is 64.
Memory:
– The suggested over-commit ratio of memory is 2:1 (virtual-to-real).
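Beyond editing the .xml definition, the current allocation can be inspected and adjusted with virsh. The following is a sketch only; vm01 is a hypothetical domain name, and the values are examples:

# Show the current vCPU count and memory allocation
virsh dominfo vm01

# Change the persistent definition to 2 virtual CPUs and 4 GB of memory
# (virsh setmem takes the size in KiB by default)
virsh setvcpus vm01 2 --config
virsh setmem vm01 4194304 --config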
You can configure the CPU weight of a virtual machine and modify it during operation. The
CPU shares of a virtual machine are calculated from its weight as a fraction of the total weight
of all virtual machines. CPU weight is helpful for managing your virtual machines by priority or
server workload. Additional details and examples of CPU shares are available under “CPU
management” in KVM Virtual Server Management, SC34-2752-00:
http://ibm.co/1PQkXHW
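As a hedged sketch of adjusting CPU weight at run time (vm01 is a hypothetical domain name; 2048 doubles the default weight of 1024):

# Display the current scheduler parameters, including cpu_shares
virsh schedinfo vm01

# Raise the weight of the running virtual machine
virsh schedinfo vm01 --live --set cpu_shares=2048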
2.2.2 Storage consideration
KVM supports virtualization of several storage devices on a KVM LPAR. You typically use
block devices or disk image files to provide local storage to a virtual machine.
Block device
A virtual machine that uses block devices for local mass storage typically performs better than
a virtual machine that uses disk image files. The virtual machine that uses block devices
achieves lower-latency and higher throughput because it minimizes the number of software
layers through which it passes. Figure 2-1 shows the block devices that QEMU can use for
KVM virtual machines.
Figure 2-1 Block devices for KVM virtual machines
(The figure shows a z Systems LPAR running KVM. On the SCSI side, LUNs 0001 - 0004 appear to the KVM host as disks sda and sdb, with partitions sdb1 and sdb2 and a logical volume vm02-lv in volume group VolGroup01; QEMU presents them to virtual machines VM01 and VM02 as virtio disks vda and vdb. On the ECKD side, devices 6201 - 6204 appear as dasda and dasdb, with partitions dasdb1 and dasdb2 and logical volume vm04-lv in VolGroup02, presented to VM03 and VM04 in the same way.)
The following block devices are supported by QEMU:
Entire devices
A physical disk, such as a SCSI or ECKD device, can be defined as a virtual disk of a
virtual machine. A virtual machine uses all of the physical disk space that it manages.
Example 2-1 shows a sample .xml file that defines a virtual disk for managing all of the
disk space of the physical devices that it manages.
Example 2-1 Sample .xml for entire devices of VM01
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/sda'/>
<target dev='vda' bus='virtio'/>
</disk>
Disk partitions
KVM for IBM z Systems can partition a physical disk. Each partition can be allocated to
the same or different virtual machines. This can help to use large physical disk space
more efficiently.
Example 2-2 shows a sample .xml file to define a virtual disk to use partitions.
Example 2-2 Sample .xml for disk partitions of VM01
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/sdb1'/>
<target dev='vdb' bus='virtio'/>
</disk>
Logical volume manager (LVM) logical volumes
KVM can create and manage logical volumes using LVM. This makes it easier to manage
the available storage in general, and it also makes it easier to back up your virtual
machines without shutting them down, thanks to LVM snapshots.
Example 2-3 shows a sample .xml file to define a virtual disk to use logical volumes.
Example 2-3 Sample .xml for logical volumes of VM02
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/VolGroup00/LogVol00'/>
<target dev='vda' bus='virtio'/>
</disk>
The following requirements must be considered when choosing to use block devices:
All block devices must be available and accessible to the hypervisor. The virtual machine
cannot access devices that are not available from the hypervisor.
You must activate or enable some block devices before you can use them. For
example, LVM logical volumes must be active, as shown in the sketch that follows.
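For example, assuming a volume group named VolGroup00 (a hypothetical name), the logical volumes can be activated on the KVM host as follows:

# Activate all logical volumes in the volume group on the host
vgchange -ay VolGroup00

# Verify that the logical volumes are now available
lvs VolGroup00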
File
A disk image file is a file that represents a local hard disk to the virtual machine. This
representation is a virtual hard disk. The size of the disk image file determines the maximum
size of the virtual hard disk. A disk image file of 100 GB can produce a virtual hard disk of 100
GB.
The disk image file is in a location outside of the virtual machine. Other than the size of the
disk image file, the virtual machine cannot access any other information about the disk image
file. The disk image file resides in the file system of any of the block devices shown in
Figure 2-1 on page 15 that are mounted on KVM. However, disk image files can also be
located across a network connection, in a remote file system, for example.
The following file types are supported by QEMU:
Raw
A raw type of disk image file preallocates all of the storage space that the virtual machine
uses when the file is created. The file resides in the KVM file system, and it requires less
overhead than QEMU Copy On Write (QCOW2).
Example 2-4 shows a sample .xml file to define a raw image file.
Example 2-4 Sample .xml to use a raw type of disk image file
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/var/lib/libvirt/images/sl12sp0.img'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
</disk>
QCOW2
QCOW2 uses a disk storage optimization strategy that delays the allocation of storage until
it is actually needed. A QCOW2 disk image file grows as data is written. QCOW2 starts
with a smaller size than the raw disk image file. QCOW2 can use the file system space of
the KVM host more efficiently.
Example 2-5 shows a sample .xml file that defines a QCOW2 image file.
Example 2-5 Sample .xml to use QCOW2 disk image file
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/sl12sp0.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
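As a sketch, a thin-provisioned QCOW2 image such as the one referenced in Example 2-5 could be created with the qemu-img utility (the path and size shown are examples only):

# Create a thin-provisioned QCOW2 disk image with a 10 GB virtual size
qemu-img create -f qcow2 /var/lib/libvirt/images/sl12sp0.qcow2 10G

# Compare the virtual size with the actual space used on disk
qemu-img info /var/lib/libvirt/images/sl12sp0.qcow2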
A virtual machine that uses block devices for local mass storage typically performs better than
a virtual machine that uses disk image files for the following reasons:
Managing the file system where the disk image file is located creates an additional
resource demand for I/O operations.
Improper partitioning of mass storage using disk image files can cause unnecessary I/O
operations.
However, disk image files provide the following benefits:
Containment
Many disk image files can be in a single storage unit. For example, disk image files can be
located on disks, partitions, logical volumes, and other storage units.
Usability
Managing multiple files is easier than managing multiple disks, multiple partitions, multiple
logical volumes, multiple arrays, and other storage units.
Mobility
You can easily move files from one location or system to another location or system.
Cloning
You can easily copy and modify files for new VMs to use.
Sparse files save space
Using a file system that supports sparse files conserves unaccessed disk space.
Remote and network accessibility
Files can be in file systems on remote systems that are connected by a network.
Important: Whether you use SCSI devices or ECKD devices, disk multipathing for
virtual machines is not required. For SCSI devices, disk multipathing is handled by KVM
for IBM z Systems. For ECKD devices, the I/O paths are handled by PR/SM in z Systems
hardware.

2.2.3 Network consideration
KVM can provide network devices as virtual Ethernet devices by configuring direct MacVTap²
connections or Open vSwitch connections. To set up a virtual network on KVM, for
purposes of this book, we considered the following factors:
For redundancy of network devices, we considered bonding two IBM Open Systems
Adapters (OSAs). Both MacVTap and Open vSwitch can be configured with a bonding
device.
In a cloud environment, it is typical to separate the management network from the data
network. For isolation between multiple networks, we prepared and set up separate OSA
devices, each connected to a different network.
As of this writing, Open vSwitch is supported by IBM Cloud Manager with OpenStack, but
MacVTap is not yet supported.
We chose to use Open vSwitch in our configuration because it is supported by IBM Cloud
Manager with OpenStack. Open vSwitch also provides more flexibility and easier
management through its command-line interface (CLI) and a database that stores network
information, reducing complexity compared to MacVTap, which is managed through the CLI
and an .xml file.
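As a minimal sketch of that CLI, assuming a bridge named ovsbr0 and a bonding device bond0 (both hypothetical names), an Open vSwitch bridge for virtual machine traffic could be set up as follows:

# Create an Open vSwitch bridge and attach the bonded OSA interface
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 bond0

# Display the switch configuration stored in the Open vSwitch database
ovs-vsctl show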
² MacVTap is a device driver meant to simplify virtualized bridged networking. For more information, see
http://virt.kernelnewbies.org/MacVTap
2.2.4 Software consideration
To operate Linux on z Systems as a virtual machine of KVM for IBM z Systems, a Linux on z
Systems distribution must be obtained from a Linux distribution partner. SUSE Linux
Enterprise Server (SLES) 12 SP1 is supported in virtual machines under the KVM for IBM z
Systems hypervisor.
2.2.5 Live migration
To perform a live migration, the source and destination hosts must be connected and have
access to the same or equivalent system resources, and to the same storage devices and
networks. There are no restrictions on the location of the destination host: it can run in
another LPAR on the same z Systems server or on a different one.
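As a sketch of initiating such a migration (vm01 and kvmhost2 are hypothetical names; the destination is reached over SSH):

# Live migrate vm01 to the libvirtd instance on the destination host
virsh migrate --live vm01 qemu+ssh://kvmhost2/system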
Carefully consider system resources, storage, network, and performance when you prepare
to migrate a virtual machine to another host. Details are available in the
KVM Virtual Server Management section of the IBM Knowledge Center:
http://ibm.co/1PD9s89
2.3 Planning KVM virtual machine management
Libvirt³ is a management tool that is installed with KVM. You can create, delete, run, stop, and
manage your virtual machines by using the virsh command, which is provided as part of
libvirt. Virsh operations rely on the ability of the library to connect to a running libvirtd daemon.
Therefore, the daemon must be running before you use virsh.
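A minimal check, assuming a systemd-based host, is shown here:

# Verify that the libvirt daemon is active before issuing virsh commands
systemctl status libvirtd

# Start it now and enable it at boot if necessary
systemctl start libvirtd
systemctl enable libvirtd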
When you plan to manage a virtual environment on KVM as one of the resources in the cloud,
IBM Cloud Manager with OpenStack can support it. To manage your virtual environment with
IBM Cloud Manager with OpenStack, you will need to review the hardware, operating system,
and software prerequisites of IBM Cloud Manager with OpenStack. IBM Cloud Manager with
OpenStack supports KVM for IBM z Systems as compute nodes. You also need to consider
KVM for IBM z Systems prerequisites in a virtualization environment:
IBM Cloud Manager with OpenStack prerequisites
http://ibm.co/1OiaXWb
KVM for IBM z Systems prerequisites
http://ibm.co/1PD9zRg
2.4 Planning a cloud infrastructure with KVM and IBM Cloud Manager with OpenStack
In this book, we illustrate a simple scenario for building a cloud infrastructure with KVM and
IBM Cloud Manager with OpenStack to evaluate the virtualization and management
functions. These functions include the ability to create, delete, run, and stop the virtual
machine, to create a virtual network and virtual storage, to perform live migration, and to
clone a virtual machine. This section provides information to review before building your cloud
environment.
³ Libvirt is a management tool that installs with KVM. For more information, see http://wiki.libvirt.org/page/Virtio
In this section, we describe planning considerations and information about the following
situations:
KVM installation
Virtual machines
IBM Cloud Manager with OpenStack installation
IBM Cloud Manager with OpenStack deployment
If you plan to build and manage a virtual environment using only KVM, skip the following
sections:
2.4.3, “Planning for IBM Cloud Manager with OpenStack installation” on page 22
2.4.4, “Planning for IBM Cloud Manager with OpenStack deployment” on page 24
2.4.1 Planning for KVM for IBM z Systems installation
This section describes the considerations for installing KVM for IBM z Systems. Then we
outline the information required for the installation process.
Planning considerations
Consider the following areas before installing KVM for IBM z Systems:
Number of CPUs in LPAR
This depends on the number of virtual CPUs needed and the level of planned
over-commitment.
Amount of memory in LPAR
This depends on the memory needed for the virtual machines and the level of planned
memory over-commitment.
DVD or FTP installation
As described in 2.1.3, “Installation methods” on page 14, it is possible to start the
installation from HMC using a DVD drive or from an FTP server. This depends on your
environment.
Type of storage
Choose either SCSI or ECKD devices that KVM for IBM z Systems will use.
Storage space for virtual machines
Consider how to provide storage to virtual machines. For example, do you plan to use
whole disks attached to virtual machines or a QCOW2 file? Do you plan to expand LVM?
Number of OSA ports and networking
KVM for IBM z Systems needs only one OSA port. However, to provide redundancy, it is
suggested that you use a bonding interface and more than one OSA port.
Networking for virtual machines
Consider how your virtual machines will be connected to the LAN. For example, will you be
using MacVTap or Open vSwitch? Will you use VLANs? If you will be using Open vSwitch,
how many Open vSwitch bridges are needed?
Information required for installation
The following is a list of information that you will need during installation:
FTP information
IP address of the FTP server, FTP directory with required files, FTP credentials
OSA device address
The OSA triplet that will be used to create the KVM for IBM z Systems network interface
card (NIC)
Networking information
For KVM for IBM z Systems, the IP address, network mask, default gateway, and host
name
VLAN (if needed)
Parent interface of VLAN, VLAN ID
DNS (if needed)
IP addresses of DNS servers, search domain
Network time protocol (NTP) (if needed)
Addresses of NTP servers to be used by KVM for IBM z
Installation disks
If you are installing on SCSI devices, the following information is required to establish a
path to the related storage:
– FCP device address
– The target WWPN (disk storage subsystem WWPN)
– LUN ID
If installing on ECKD devices, the DASD device address is required.
Root password
The password for the root user
2.4.2 Planning for virtual machines
This section describes the considerations for virtual machines. Then, we outline the
information required for the installation process.
Planning considerations
Consider the following areas before installing a virtual machine:
Number of virtual CPUs
Amount of memory
Virtual machines need to have enough memory to avoid paging. However, too much
memory for a virtual machine will leave less shared memory for other virtual machines.
Installation source
Storage space for virtual machines
Consider how to provide storage to virtual machines. For example, do you plan to use
whole disks attached to virtual machines, or a QCOW2 file (a command sketch follows this
list)? Do you plan to expand LVM?
I/O drivers
Use virtio drivers. No device-specific drivers are needed for SCSI devices, ECKD devices,
or NICs in virtual machines.
Multipath
No disk multipathing is needed in virtual machines; it is all handled by KVM. See the
shaded box marked “Important” on page 18 for further information.
Networking
Plan how many virtual network adapters will be needed for a virtual machine and whether
they will handle VLAN tags.
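As noted under “Storage space for virtual machines”, a QCOW2 file is one way to back a
guest disk. A minimal sketch follows; the image path and size are examples only:
# create a thin-provisioned 10 GB QCOW2 image for a guest disk
qemu-img create -f qcow2 /var/lib/libvirt/images/itsovm1.qcow2 10G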
Information required for installation
The following list depends on the operating system that will be installed. This type of
information is required during installation:
FTP information (assuming FTP installation)
IP address of FTP server, FTP directory with required files, FTP user identification and
password
Networking information
Virtual machine IP address, network mask and default gateway, host name
VLAN
Parent interface of VLAN, VLAN ID
DNS (if needed)
IP addresses of DNS servers, search domain
NTP (if needed)
IP addresses of NTP servers to be used by the virtual machine
File system layout
2.4.3 Planning for IBM Cloud Manager with OpenStack installation
This section describes areas to consider when planning to install IBM Cloud Manager with
OpenStack. Then, we outline the information that is required for the installation process.
If you plan to build and manage a virtual environment using only KVM, skip this section.
Planning considerations
Consider the following before installing IBM Cloud Manager with OpenStack:
Hardware
The deployment server and controller for IBM Cloud Manager with OpenStack 4.3 do not
support installation on a z Systems platform. An x86 server, with its CPU, memory, disk,
and NIC, is needed for the cloud environment. For detailed information about the hardware
prerequisites, see IBM Cloud Manager with OpenStack hardware prerequisites in the IBM
Knowledge Center:
http://ibm.co/1SJUM54
Also, consider whether you will install and run the deployment server, controller, and
database server on the same or separate nodes.
Operating systems
At the time of writing, Red Hat Enterprise Linux Version 7.1 (64-bit) is supported for the
deployment and controller servers on an x86 server.
Database server
Determine the database server product that will be used for IBM Cloud Manager with
OpenStack databases. As of this writing, the supported databases are IBM DB2®, MariaDB,
and MySQL.
Yum repository
Use Red Hat Subscription Management or a local yum repository.
Installation method
Install from DVDs or by downloading and installing packages using CLI, GUI, or silent
installation.
Information required for installation
The following information is required during installation:
Networking information
IP address, network mask and default gateway, host name with a fully qualified domain
name that includes the domain suffix
DNS server
IP address of the DNS server which has the host name for the deployment server
Yum repository
IP address or host name of the repository server and directory
Root password or user ID with root authority
Root authority is required to run the installer
NTP server
IP addresses of NTP servers to be used by the deployment server and all nodes
systemd status
Must be in running status because the product installer requires a functional systemd
environment, and systemd is used to manage the service state of the Chef server
Note: systemd is a suite of basic building blocks for a Linux system. Visit
http://www.freedesktop.org/wiki/Software/systemd/.
2.4.4 Planning for IBM Cloud Manager with OpenStack deployment
This section describes considerations for deploying the controller and compute node. Then,
we outline the information required for the deployment process.
If you plan to build and manage a virtual environment using only KVM, skip this section.
Planning considerations
Consider the following before deploying cloud environment components, such as the
controller node, compute node, and database node:
Topology
IBM Cloud Manager with OpenStack provides five predefined topologies. A description of
each topology is shown in Table 5-1 on page 79. Consider which topology will be used.
Database server
Determine the database server product that will be used for IBM Cloud Manager with
OpenStack databases. As of this writing, the supported databases are DB2, MariaDB, and
MySQL.
Number of NICs
Only one NIC is needed for the management network of KVM for IBM z Systems as a
compute node. However, if you want virtual machines on the compute node to use the DHCP
and L3 services provided by Neutron, the controller and compute nodes must have at
least two NICs: one for the management network and one for the data network.
Network type
Determine which network type to use: local, flat, VLAN, generic routing encapsulation
(GRE), or virtual extensible LAN (VXLAN).
Web browsers
Select a web browser on your desktop environment as the client to access the IBM Cloud
Manager with OpenStack servers. These are the minimum supported versions:
– Internet Explorer 11.0 with latest fix pack
– Firefox 31 with latest fix pack
– Chrome 38 with latest fix pack
– Safari 7 with latest fix pack
Information required for deployment
This list depends on the topology that will be used, but this type of information is usually
required during installation:
Controller node
Environment name
IP address
Network interface name
Open vSwitch network type
Fully qualified domain name
The root user login information, either a password or a Secure Shell (SSH) identity file
Note: For more information about OpenStack Networking (Neutron), see:
http://docs.openstack.org/icehouse/install-guide/install/apt/content/basics-networking-neutron.html
https://wiki.openstack.org/wiki/Neutron#OpenStack_Networking_.28.22Neutron.22.29
Compute node for KVM for IBM z Systems
Topology name of compute node
Environment name
Fully qualified domain name
The root user login information (either password or SSH identity file)
IP address
Network interface name
Deployment of virtual machines
Network information, including subnet, IP address for the subnet, IP address of gateway,
IP version, DNS server
Image source location and image file name
Image format (for example QCOW2)
Minimum disk and minimum RAM (if needed)
Chapter 3. Installing and configuring the
environment
This chapter provides the step-by-step instructions that were performed to build our KVM
environment. It contains three parts:
Our configuration
Describes our installation goal, together with the resources we used
Setting up KVM for IBM z Systems
Explains the preparation, installation, and configuration steps
Deploying virtual machines
Lists the domain definition and the Linux on z Systems installation
3.1 Our configuration
This section describes our target configuration and the components and hardware resources
that we use to implement it.
3.1.1 Logical view
Figure 3-1 illustrates a logical view of our target configuration. Our goal is to allow virtual
machines to connect to two different networks: One for management traffic and the other for
user data traffic. This is achieved by creating two separate Open vSwitch bridges. KVM for
IBM z Systems is connected directly to the management network.
We implemented two KVM for IBM z Systems images with the same logical configuration so
that the virtual servers can be migrated between hypervisors as needed.
Figure 3-1 Logical configuration
3.1.2 Physical resources
Figure 3-2 on page 29 shows our hardware and connectivity setup:
One IBM z13 with two LPARs
Two OSA cards connected to the management network
Two OSA cards connected to a data network
Multiple FICON cards for connectivity to storage
– SCSI devices
– ECKD devices
One FTP server
One x86 server running IBM Cloud Manager with OpenStack (controller node)
Both LPARs have access to all resources. We used one LPAR for installing KVM for IBM z
Systems on SCSI devices and the other LPAR for installing KVM for IBM z on ECKD devices.
Figure 3-2 Our environment - hardware resources and connectivity
3.1.3 Preparation tasks
There are several tasks to perform before the KVM for IBM z installer can be started, which
we explain in the subsections that follow:
Input/output configuration data set (IOCDS)
Storage area network (SAN)
FTP server
Input/output configuration data set (IOCDS)
An IOCDS was prepared to support our environment, as shown in Figure 3-2. We had two
logical partitions (A25 and A2F) with different channel types (OSA CHPIDs, FCP CHPIDs,
and FICON CHPIDs).
An IOCDS sample for the LPARs and each channel type is provided in Example 3-1.
Example 3-1 Sample IOCDS definitions
******************************************************
**** Sample LPAR and Channel Subsystem ******
******************************************************
RESOURCE PARTITION=((CSS(0),(A25,5),(A2F,F)))
******************************************************
**** Sample OSA CHPID / CNTLUNIT and IODEVICE ******
******************************************************
CHPID PATH=(CSS(0),04),SHARED, *
PARTITION=((CSS(0),(A25,A2F),(=))), *
PCHID=214,TYPE=OSD
CNTLUNIT CUNUMBR=2D00, *
PATH=((CSS(0),04)), *
UNIT=OSA
IODEVICE ADDRESS=(2D00,015),CUNUMBR=(2D00),UNIT=OSA
IODEVICE ADDRESS=(2D0F,001),UNITADD=FE,CUNUMBR=(2D00), *
UNIT=OSAD
******************************************************
**** Sample FCP CHPID / CNTLUNIT and IODEVICE ******
******************************************************
CHPID PATH=(CSS(0),76),SHARED, *
PARTITION=((CSS(0),(A25,A2F),(=))), *
PCHID=1B1,TYPE=FCP
CNTLUNIT CUNUMBR=B600, *
PATH=((CSS(0),76)),UNIT=FCP
IODEVICE ADDRESS=(B600,032),CUNUMBR=(B600),UNIT=FCP
IODEVICE ADDRESS=(B6FC,002),CUNUMBR=(B600),UNIT=FCP
******************************************************
**** Sample FICON CHPID / CNTLUNIT and IODEVICE ******
******************************************************
CHPID PATH=(CSS(0),48),SHARED, *
PARTITION=((CSS(0),(A25,A2F),(=))), *
SWITCH=61,PCHID=11D,TYPE=FC
CNTLUNIT CUNUMBR=6200, *
PATH=((CSS(0),48)),UNITADD=((00,256)), *
LINK=((CSS(0),08)),CUADD=2,UNIT=2107
IODEVICE ADDRESS=(6200,042),CUNUMBR=(6200),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(622A,214),CUNUMBR=(6200),STADET=Y,SCHSET=1, *
UNIT=3390A
For more information about IOCDS, see Stand-Alone Input/Output Configuration Program
User’s Guide, IBM System z, SB10-7152:
http://www.ibm.com/support/docview.wss?uid=pub1sb10715206
Storage area network (SAN)
The SAN configuration usually involves tasks such as cabling, zoning, and LUN masking. We
defined 10 LUNs on the disk storage and used the worldwide port names (WWPNs) of the
disk adapters as the targets.
FTP server
We used an FTP server with IP address 192.168.60.15 and FTP user credentials. We created
two directories in the FTP directory: KVM and SLES12SP1. In each directory, we created a
DVD1 directory on which we mounted the corresponding .iso file.
Because the DVD1 directory is mounted read-only, and because we needed to create
various .ins and .prm files, we copied the DVD1/images directory to the main KVM directory
and created .ins files in that directory. Then, we created corresponding .prm files in the
images/ directory (a command sketch follows the structure listing below).
The resulting structure looks like this:
KVM/
– DVD1/ (KVM for IBM z ISO image mounted as read-only)...
– images/
• generic.prm
• initrd.addrsize
• initrd.img
• install.img
• itso1.prm
• itso2.prm
• kernel.img
• TRANS.TBL
• upgrade.img
– itso1.ins
– itso2.ins
SLES12SP1/
– DVD1/ (SLES12SP1 ISO image mounted as read-only)
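As a sketch of how such a structure can be prepared (the FTP root path and ISO file name
here are illustrative, not the exact ones we used):
# mount the KVM for IBM z ISO read-only under the FTP tree
mkdir -p /var/ftp/KVM/DVD1
mount -o ro,loop KVMIBM-1.1.0-s390x.iso /var/ftp/KVM/DVD1
# copy the images directory so that writable .prm files can be created
cp -r /var/ftp/KVM/DVD1/images /var/ftp/KVM/
# the .ins files then go in /var/ftp/KVM/ and the .prm files in /var/ftp/KVM/images/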
3.2 Setting up KVM for IBM z Systems
This section lists the steps needed to install KVM for IBM z, from preparation tasks, through
the installation process, to the final configuration for our environment.
We describe the following tasks in this section:
Preparing the .ins and .prm files
Installing KVM for IBM z
Configuring KVM for IBM z
Note: This section shows the installation and configuration of KVM for IBM z with SCSI
devices. There are only subtle changes when installing on ECKD devices, as described in
Appendix A, “Installing KVM for IBM z Systems with ECKD devices” on page 95.
3.2.1 Preparing the .ins and .prm files
As described in “FTP server” on page 31, we had an FTP server to use for installing KVM for
IBM z. We created a directory structure that contained the .ins and .prm files needed for the
KVM for IBM z installer.
Example 3-2 shows the contents of the itso1.ins file, which is a copy of the generic .ins file
provided in the DVD1 directory. Only the line pointing to itso1.prm was modified.
Example 3-2 itso1.ins
* for itsokvm1
images/kernel.img 0x00000000
images/initrd.img 0x02000000
images/itso1.prm 0x00010480
images/initrd.addrsize 0x00010408
Example 3-3 shows the itso1.prm file. It defines LUNs for the installer, network properties,
and the location of the FTP repository.
Example 3-3 itso1.prm
ro ramdisk_size=40000 rd.zfcp=0.0.b600,0x500507680120bc24,0x0000000000000000
rd.zfcp=0.0.b600,0x500507680120bc24,0x0001000000000000
rd.zfcp=0.0.b600,0x500507680120bc24,0x0002000000000000
rd.zfcp=0.0.b700,0x500507680120bb91,0x0000000000000000
rd.zfcp=0.0.b700,0x500507680120bb91,0x0001000000000000
rd.zfcp=0.0.b700,0x500507680120bb91,0x0002000000000000
rd.znet=qeth,0.0.2d00,0.0.2d01,0.0.2d02,layer2=1,portno=0,portname=DUMMY
ip=192.168.60.70::192.168.60.1:255.255.255.0:itsokvm1:enccw0.0.2d00:none
inst.repo=ftp://ftp:ftp@192.168.60.15/KVM/DVD1
Each rd.zfcp statement contains three parameters which, together, define a path to a
LUN. The first parameter defines the FCP device on the server side (actually, a device
from IOCDS). The second parameter defines the target WWPN, which is a WWPN of disk
storage. The third parameter provides a LUN number. This means that the rd.zfcp
statements in Example 3-3 define two different paths to each of three LUNs.
The rd.znet statement defines which device triplet is used as the NIC for an installer.
The ip statement defines the IP properties for the NIC.
The inst.repo statement defines the location of the install repositories for KVM for IBM z.
In our case, this is the read-only directory of a loop-mounted ISO image.
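For example, the first rd.zfcp statement in Example 3-3 can be read as follows (this
breakdown only restates the parameters that are described above):
rd.zfcp=0.0.b600,0x500507680120bc24,0x0000000000000000
FCP device: 0.0.b600 (the device from the IOCDS)
Target WWPN: 0x500507680120bc24 (the disk storage port)
LUN: 0x0000000000000000 (LUN 0)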
3.2.2 Installing KVM for IBM z
This section describes the steps for installing KVM for IBM z with SCSI devices.
Figure 3-3 shows two logical partitions: A25 and A2F. Both partitions are active without a
running operating system.
Figure 3-3 Two unused logical partitions
We installed KVM for IBM z using an FTP server.
Figure 3-4 shows how to invoke the Load from Removable Media, or Server panel by
selecting a target LPAR, clicking the small arrow icon next to its name, and selecting
Recovery and then Load from Removable Media, or Server task.
Figure 3-4 Invoke Load from Removable Media, or Server
Figure 3-5 shows the window in which we provided the IP address of our FTP server, together
with FTP credentials. The file location field points to the directory where we put our .ins files
as described in 3.1.3, “Preparation tasks” on page 29.
Figure 3-5 Load from Removable Media, or Server
When the FTP server is contacted, a table listing all of the .ins files displays. We chose the
itso1.ins file, as shown in Figure 3-6. This file contains all the necessary information for
installing KVM for IBM z on our SCSI devices.
Figure 3-6 Select the Software to Install window
Load is a disruptive action, which requires a confirmation as shown in Figure 3-7.
Figure 3-7 Task confirmation dialog
It takes time to load the installer. To see what was happening on the server, we opened the
operating system messages panel. When the installer was ready, it printed a message
prompting us to open a Secure Shell (SSH) connection, as shown in Figure 3-8. Notice that
all installer panels use the ncurses interface:
Figure 3-8 Operating system messages
After you open an SSH session, a panel opens (see Figure 3-9 on page 35) from which you
can select the language:
Use the Tab key to move among fields
Use the Enter key and spacebar to press a button
You can switch between installer, shell, and the debug panels by using Ctrl-Right or
Ctrl-Left arrow keys at any time during the installation.
Figure 3-9 Welcome to KVM for IBM z
After accepting the International Program License Agreement, IBM and non-IBM Terms and
Conditions, and confirming that you want to install KVM for IBM z, the panel for selecting
disks for installation displays.
Figure 3-10 shows the panel that displays the available LUNs. These are the three LUNs we
defined in the .prm file in 3.2.1, “Preparing the .ins and .prm files” on page 32. The LUNs are
recognized as multipathed devices. From this panel, it is not clear which mpath device
represents which LUN. Such information is useful for manual partitioning.
Figure 3-10 Devices to install KVM for IBM z to
To determine which mpath represents which LUN, we switched to shell using Ctrl-Right
Arrow. With the multipath command (see Example 3-4 on page 36), three interesting pieces
of information display:
mpathe represents LUN 0, mpatha represents LUN 1, and mpathf represents LUN 2.
In addition to the two paths to each of our three LUNs specified in the parameter file, the
installer detected six more available paths to each LUN.
Aside from the three LUNs specified in the parameter file, the installer discovered another
seven LUNs available to our LPAR.
Example 3-4 multipath output
[root@itsokvm1 ~]# multipath -l
mpathe (360050768018305e120000000000000ea) dm-4 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:3:0 sdr 65:16 active undef running
| |- 1:0:0:0 sde 8:64 active undef running
| |- 1:0:3:0 sdaa 65:160 active undef running
| `- 0:0:2:0 sda 8:0 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:4:0 sdaf 65:240 active undef running
|- 0:0:5:0 sdap 66:144 active undef running
|- 1:0:4:0 sdbi 67:192 active undef running
`- 1:0:5:0 sdbs 68:96 active undef running
mpathd (360050768018305e120000000000000f0) dm-3 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:6 sdi 8:128 active undef running
| |- 1:0:0:6 sdq 65:0 active undef running
| |- 0:0:3:6 sdab 65:176 active undef running
| `- 1:0:3:6 sdbe 67:128 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:4:6 sdal 66:80 active undef running
|- 0:0:5:6 sdav 66:240 active undef running
|- 1:0:4:6 sdbo 68:32 active undef running
`- 1:0:5:6 sdby 68:192 active undef running
mpathc (360050768018305e120000000000000ed) dm-2 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:3 sdai 66:32 active undef running
| |- 0:0:5:3 sdas 66:192 active undef running
| |- 1:0:4:3 sdbl 67:240 active undef running
| `- 1:0:5:3 sdbv 68:144 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:3 sdd 8:48 active undef running
|- 1:0:0:3 sdm 8:192 active undef running
|- 0:0:3:3 sdx 65:112 active undef running
`- 1:0:3:3 sdbb 67:80 active undef running
mpathb (360050768018305e120000000000000ee) dm-1 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:4 sdf 8:80 active undef running
| |- 1:0:0:4 sdo 8:224 active undef running
| |- 0:0:3:4 sdy 65:128 active undef running
| `- 1:0:3:4 sdbc 67:96 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:4:4 sdaj 66:48 active undef running
|- 0:0:5:4 sdat 66:208 active undef running
|- 1:0:4:4 sdbm 68:0 active undef running
`- 1:0:5:4 sdbw 68:160 active undef running
mpatha (360050768018305e120000000000000eb) dm-0 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:1 sdag 66:0 active undef running
| |- 0:0:5:1 sdaq 66:160 active undef running
| |- 1:0:4:1 sdbj 67:208 active undef running
| `- 1:0:5:1 sdbt 68:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:1 sdb 8:16 active undef running
|- 1:0:0:1 sdg 8:96 active undef running
|- 0:0:3:1 sdt 65:48 active undef running
`- 1:0:3:1 sdaz 67:48 active undef running
mpathj (360050768018305e120000000000000f2) dm-9 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 1:0:0:8 sdu 65:64 active undef running
| |- 0:0:3:8 sdad 65:208 active undef running
| |- 0:0:2:8 sdl 8:176 active undef running
| `- 1:0:3:8 sdbg 67:160 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:4:8 sdan 66:112 active undef running
|- 0:0:5:8 sdax 67:16 active undef running
|- 1:0:4:8 sdbq 68:64 active undef running
`- 1:0:5:8 sdca 68:224 active undef running
mpathi (360050768018305e120000000000000f3) dm-8 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:9 sdao 66:128 active undef running
| |- 0:0:5:9 sday 67:32 active undef running
| |- 1:0:4:9 sdbr 68:80 active undef running
| `- 1:0:5:9 sdcb 68:240 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 1:0:0:9 sdw 65:96 active undef running
|- 0:0:2:9 sdn 8:208 active undef running
|- 0:0:3:9 sdae 65:224 active undef running
`- 1:0:3:9 sdbh 67:176 active undef running
mpathh (360050768018305e120000000000000f1) dm-7 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:7 sdam 66:96 active undef running
| |- 0:0:5:7 sdaw 67:0 active undef running
| |- 1:0:4:7 sdbp 68:48 active undef running
| `- 1:0:5:7 sdbz 68:208 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:7 sdj 8:144 active undef running
|- 0:0:3:7 sdac 65:192 active undef running
|- 1:0:0:7 sds 65:32 active undef running
`- 1:0:3:7 sdbf 67:144 active undef running
mpathg (360050768018305e120000000000000ef) dm-6 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:5 sdak 66:64 active undef running
| |- 0:0:5:5 sdau 66:224 active undef running
| |- 1:0:4:5 sdbn 68:16 active undef running
| `- 1:0:5:5 sdbx 68:176 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 1:0:0:5 sdp 8:240 active undef running
|- 0:0:2:5 sdh 8:112 active undef running
|- 0:0:3:5 sdz 65:144 active undef running
`- 1:0:3:5 sdbd 67:112 active undef running
mpathf (360050768018305e120000000000000ec) dm-5 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 1:0:0:2 sdk 8:160 active undef running
| |- 0:0:2:2 sdc 8:32 active undef running
| |- 0:0:3:2 sdv 65:80 active undef running
| `- 1:0:3:2 sdba 67:64 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:4:2 sdah 66:16 active undef running
|- 0:0:5:2 sdar 66:176 active undef running
|- 1:0:4:2 sdbk 67:224 active undef running
`- 1:0:5:2 sdbu 68:128 active undef running
Example 3-5 shows the output confirming that only three LUNs are configured for use, as
specified in the parameter file, although 10 LUNs were discovered.
Example 3-5 lszfcp output
[root@itsokvm1 ~]# lszfcp -D
0.0.b600/0x500507680120bc24/0x0000000000000000 0:0:0:0
0.0.b600/0x500507680120bc24/0x0001000000000000 0:0:0:1
0.0.b600/0x500507680120bc24/0x0002000000000000 0:0:0:2
0.0.b600/0x500507680130bc24/0x0000000000000000 0:0:1:0
0.0.b600/0x500507680130bc24/0x0001000000000000 0:0:1:1
0.0.b600/0x500507680130bc24/0x0002000000000000 0:0:1:2
0.0.b600/0x500507680120bb91/0x0000000000000000 0:0:2:0
0.0.b600/0x500507680120bb91/0x0001000000000000 0:0:2:1
0.0.b600/0x500507680120bb91/0x0002000000000000 0:0:2:2
0.0.b600/0x500507680130bb91/0x0000000000000000 0:0:3:0
0.0.b600/0x500507680130bb91/0x0001000000000000 0:0:3:1
0.0.b600/0x500507680130bb91/0x0002000000000000 0:0:3:2
0.0.b700/0x500507680120bc24/0x0000000000000000 1:0:0:0
0.0.b700/0x500507680120bc24/0x0001000000000000 1:0:0:1
0.0.b700/0x500507680120bc24/0x0002000000000000 1:0:0:2
0.0.b700/0x500507680130bc24/0x0000000000000000 1:0:1:0
0.0.b700/0x500507680130bc24/0x0001000000000000 1:0:1:1
0.0.b700/0x500507680130bc24/0x0002000000000000 1:0:1:2
0.0.b700/0x500507680120bb91/0x0000000000000000 1:0:2:0
0.0.b700/0x500507680120bb91/0x0001000000000000 1:0:2:1
0.0.b700/0x500507680120bb91/0x0002000000000000 1:0:2:2
0.0.b700/0x500507680130bb91/0x0000000000000000 1:0:3:0
0.0.b700/0x500507680130bb91/0x0001000000000000 1:0:3:1
0.0.b700/0x500507680130bb91/0x0002000000000000 1:0:3:2
Figure 3-11 shows that we selected all three configured LUNs that KVM for IBM z will be
installed on. In this panel, we can define additional devices if needed.
Figure 3-11 Selected devices
Figure 3-12 shows the panel in which we can select automatic or manual partitioning. For our
installation, we chose automatic partitioning because we did not have any particular
requirements for the system layout.
Figure 3-12 Select partition method
Figure 3-13 shows the partition summary panel.
Figure 3-13 Partition summary panel
Next, we chose the time zone as depicted in Figure 3-14.
Figure 3-14 Time zone selection
In most installations, it is required to have a common time source among all components in
the IT environment. The IBM z Systems platform uses Server Time Protocol (STP) as its time
source provider, so we did not enable NTP servers, as shown in Figure 3-15.
Figure 3-15 NTP configuration
Figure 3-16 shows the panel for network configuration. A NIC named enccw0.0.2d00 was
already set online by the installer. This NIC was specified in the parameter file that is
described in 3.2.1, “Preparing the .ins and .prm files” on page 32. If no network was specified
in the parameter file, or if we needed to configure another card, this panel would have allowed
it. We decided to check whether the IP information for the NIC was set as specified in the
parameter file.
Figure 3-16 Configure network
Figure 3-17 shows the configuration of the enccw0.0.2d00 NIC. All of the parameters were
correctly read from the parameter file, and no changes were needed.
Figure 3-17 Network device configuration
We did not need to configure another NIC, so we went to the next panel, as shown in
Figure 3-18.
Figure 3-18 Configure network
Figure 3-19 shows the DNS configuration panel. The value in the Hostname field was read
from the parameter file. We did not provide any other DNS parameters because they were not
needed in our environment.
Figure 3-19 DNS configuration
Figure 3-20 shows the installation summary.
Figure 3-20 Installation summary
If there were existing partitions or volume groups, the panel shown in Figure 3-21 would
inform us that they were going to be removed.
Figure 3-21 Partitions and LVMs to be removed
After pressing Ok, the installation begins. The progress bar shown in Figure 3-22 reports the
installation status.
Figure 3-22 Installation progress
After the installation process is finished, the panel shown in Figure 3-23 opens. After a reboot,
KVM for IBM z Systems is ready for use.
Figure 3-23 Reboot after installation
3.2.3 Configuring KVM for IBM z
This section describes several additional tasks we needed to perform in our environment after
KVM for IBM z was installed.
“Identifying our IPL device”
“Applying maintenance” on page 45
“Defining NICs” on page 46
“Defining Open vSwitches” on page 48
“Adding LUNs” on page 50
Identifying our IPL device
During the installation we used automatic partitioning, and we had no control over which LUN
was to be used as the initial program load (IPL) device. Example 3-6 shows that the /boot
mount point resides on device 360050768018305e120000000000000ec.
Example 3-6 Find /boot device
[root@itsokvm1 ~]# mount |grep boot
/dev/mapper/360050768018305e120000000000000ec1 on /boot type ext4
(rw,relatime,seclabel,data=ordered)
Example 3-7 shows the output from the multipath command. It shows that device
360050768018305e120000000000000ec maps to LUN 2.
Example 3-7 multipath output
[root@itsokvm1 ~]# multipath -l
360050768018305e120000000000000ec dm-0 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:0:2 sdd 8:48 active undef running
| |- 0:0:1:2 sda 8:0 active undef running
| |- 1:0:0:2 sdf 8:80 active undef running
| `- 1:0:1:2 sdh 8:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:2 sdb 8:16 active undef running
|- 0:0:3:2 sdc 8:32 active undef running
|- 1:0:2:2 sdi 8:128 active undef running
`- 1:0:3:2 sdj 8:144 active undef running
360050768018305e120000000000000eb dm-6 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:1 sdr 65:16 active undef running
| |- 0:0:3:1 sdt 65:48 active undef running
| |- 1:0:2:1 sdv 65:80 active undef running
| `- 1:0:3:1 sdx 65:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:0:1 sdw 65:96 active undef running
|- 0:0:1:1 sdq 65:0 active undef running
|- 1:0:0:1 sds 65:32 active undef running
`- 1:0:1:1 sdu 65:64 active undef running
360050768018305e120000000000000ea dm-1 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:0:0 sdn 8:208 active undef running
| |- 0:0:1:0 sde 8:64 active undef running
| |- 1:0:0:0 sdk 8:160 active undef running
| `- 1:0:1:0 sdm 8:192 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:0 sdg 8:96 active undef running
|- 0:0:3:0 sdl 8:176 active undef running
|- 1:0:2:0 sdp 8:240 active undef running
`- 1:0:3:0 sdo 8:224 active undef running
Figure 3-24 shows how to IPL KVM for IBM z from the correct LUN when needed.
Figure 3-24 Load window
Applying maintenance
At the time of writing, Fix Pack 1 (FP1) was available from:
http://www.ibm.com/support/fixcentral/
After downloading the code, we followed the steps provided in the README file, which
accompanied FP1. Example 3-8 shows the commands that we executed, as instructed.
Example 3-8 Applying fixes
[root@itsokvm1 ~]# ll
total 152360
-rw-r--r--. 1 root root 156010496 Sep 22 11:11 KVMIBM-1.1.0.1-20150911-s390x.iso
-rw-r--r--. 1 root root 3260 Sep 22 11:11 README
[root@itsokvm1 ~]# mkdir -p /mnt/FIXPACK
[root@itsokvm1 ~]# mount -o ro,loop KVMIBM-1.1.0.1-20150911-s390x.iso /mnt/FIXPACK/
[root@itsokvm1 ~]# ls -l /mnt/FIXPACK/
total 41
dr-xr-xr-x. 2 1055 1055 2048 Sep 10 18:00 apar_db
-r-xr-xr-x. 1 1055 1055 33836 Sep 10 18:00 ibm_apar.sh
-r--r--r--. 1 1055 1055 3266 Sep 10 18:00 README
dr-xr-xr-x. 4 1055 1055 2048 Sep 10 18:00 Updates
[root@itsokvm1 ~]# cd /mnt/FIXPACK
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -y /mnt/FIXPACK/Updates/
Generating local repository to /mnt/FIXPACK/Updates/ ..
fixpack.repo :
[FIXPACK]
name=IBM FixPack ISO
baseurl=file:///mnt/FIXPACK/Updates/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-KVM-FOR-IBM
Copy fixpack.repo to /etc/yum.repos.d/ ? [y/N]y
/tmp//fixpack.repo -> /etc/yum.repos.d/fixpack.repo
Installation of REPO FIXPACK successful
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -a
Fetching packages from yum...
Creating APAR dependency list...
Analysing the available APAR against installed rpms
APAR | Status | Subject
-------------------------------------------------------------
ZZ00466 | NONE | FP1 fix collection (128088)
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -i latest
Found latest available APAR: ZZ00466
...
Do you want to continue with installation [y/N]y
Clean expirable cache files..
...
Total download size: 147 M
Is this ok [y/d/N]: y
Downloading packages:
...
Complete!
Processing done.
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -a
Fetching packages from yum...
Creating APAR dependency list...
Analysing the available APAR against installed rpms
APAR | Status | Subject
-------------------------------------------------------------
ZZ00466 | APPLIED | FP1 fix collection (128088)
[root@itsokvm1 FIXPACK]# reboot
Defining NICs
As described in 3.1, “Our configuration” on page 28, our environment needed more than one
NIC to support two different LANs for virtual servers, each LAN connected through a bonding
interface. At this point, our image contained only one configured NIC, as shown in
Example 3-9; it is the NIC that provides access to KVM for IBM z.
Example 3-9 Checking configured NICs
[root@itsokvm1 ~]# znetconf -c
Device IDs                 Type    Card Type CHPID Drv. Name          State
--------------------------------------------------------------------------------
0.0.2d00,0.0.2d01,0.0.2d02 1731/01 OSD_1000  04    qeth enccw0.0.2d00 online
Example 3-10 shows a list of unconfigured NICs available to our environment.
Example 3-10 Checking available NICs
[root@itsokvm1 ~]# znetconf -u
Scanning for network devices...
Device IDs Type Card Type CHPID Drv.
------------------------------------------------------------
0.0.2d03,0.0.2d04,0.0.2d05 1731/01 OSA (QDIO) 04 qeth
0.0.2d06,0.0.2d07,0.0.2d08 1731/01 OSA (QDIO) 04 qeth
0.0.2d09,0.0.2d0a,0.0.2d0b 1731/01 OSA (QDIO) 04 qeth
0.0.2d0c,0.0.2d0d,0.0.2d0e 1731/01 OSA (QDIO) 04 qeth
0.0.2d20,0.0.2d21,0.0.2d22 1731/01 OSA (QDIO) 05 qeth
0.0.2d23,0.0.2d24,0.0.2d25 1731/01 OSA (QDIO) 05 qeth
0.0.2d26,0.0.2d27,0.0.2d28 1731/01 OSA (QDIO) 05 qeth
0.0.2d29,0.0.2d2a,0.0.2d2b 1731/01 OSA (QDIO) 05 qeth
0.0.2d2c,0.0.2d2d,0.0.2d2e 1731/01 OSA (QDIO) 05 qeth
0.0.2d40,0.0.2d41,0.0.2d42 1731/01 OSA (QDIO) 06 qeth
0.0.2d43,0.0.2d44,0.0.2d45 1731/01 OSA (QDIO) 06 qeth
0.0.2d46,0.0.2d47,0.0.2d48 1731/01 OSA (QDIO) 06 qeth
0.0.2d49,0.0.2d4a,0.0.2d4b 1731/01 OSA (QDIO) 06 qeth
0.0.2d4c,0.0.2d4d,0.0.2d4e 1731/01 OSA (QDIO) 06 qeth
0.0.2d60,0.0.2d61,0.0.2d62 1731/01 OSA (QDIO) 07 qeth
0.0.2d63,0.0.2d64,0.0.2d65 1731/01 OSA (QDIO) 07 qeth
0.0.2d66,0.0.2d67,0.0.2d68 1731/01 OSA (QDIO) 07 qeth
0.0.2d69,0.0.2d6a,0.0.2d6b 1731/01 OSA (QDIO) 07 qeth
As shown in Figure 3-2 on page 29, we chose to use devices 2d03, 2d23, 2d43, and 2d63 to
connect our Open vSwitch bridges to the LAN. The devices need to be configured as Layer 2
devices, and they need to be able to provide bridging functions.
We configured them with the required parameters and confirmed that the needed devices
were online, as shown in Example 3-11.
Example 3-11 Configuring NICs online
[root@itsokvm1 ~]# znetconf -a 2d03 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d03 (enccw0.0.2d03)
[root@itsokvm1 ~]# znetconf -a 2d23 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d23 (enccw0.0.2d23)
[root@itsokvm1 ~]# znetconf -a 2d43 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d43 (enccw0.0.2d43)
[root@itsokvm1 ~]# znetconf -a 2d63 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d63 (enccw0.0.2d63)
[root@itsokvm1 ~]# znetconf -c
Device IDs                 Type    Card Type CHPID Drv. Name          State
--------------------------------------------------------------------------------
0.0.2d00,0.0.2d01,0.0.2d02 1731/01 OSD_1000  04    qeth enccw0.0.2d00 online
0.0.2d03,0.0.2d04,0.0.2d05 1731/01 OSD_1000  04    qeth enccw0.0.2d03 online
0.0.2d23,0.0.2d24,0.0.2d25 1731/01 OSD_1000  05    qeth enccw0.0.2d23 online
0.0.2d43,0.0.2d44,0.0.2d45 1731/01 OSD_1000  06    qeth enccw0.0.2d43 online
0.0.2d63,0.0.2d64,0.0.2d65 1731/01 OSD_1000  07    qeth enccw0.0.2d63 online
Example 3-12 shows a test of the bridging capabilities of the newly configured NICs.
Example 3-12 Check bridging capabilities
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d03/device/bridge_state
active
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d23/device/bridge_state
active
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d43/device/bridge_state
active
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d63/device/bridge_state
active
We brought the NICs online dynamically; these changes will not persist across a system
restart. To make the changes persistent, there must be corresponding ifcfg-enccw0.0.2dx3
files in the /etc/sysconfig/network-scripts directory.
An example of such a file is shown in Example 3-13. There must be a corresponding file
created for each NIC, or four files in our case.
Example 3-13 Make changes permanent
[root@itsokvm1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-enccw0.0.2d03
TYPE=Ethernet
BOOTPROTO=none
NAME=enccw0.0.2d03
DEVICE=enccw0.0.2d03
ONBOOT=yes
NETTYPE=qeth
SUBCHANNELS="0.0.2d03,0.0.2d04,0.0.2d05"
OPTIONS="layer2=1 bridge_reflect_promisc=primary buffer_count=128"
Defining Open vSwitches
As described in 3.1, “Our configuration” on page 28, we needed to create two Open
vSwitches (shown as OVS in our examples). For KVM for IBM z to handle OVS, the
openvswitch service must be running. This service is not enabled by default. Example 3-14
shows the commands to check whether the service is running, enable the service to start
after a system restart, start the service dynamically, and check the status after the service is
started.
Example 3-14 openvswitch service
[root@itsokvm1 ~]# ovs-vsctl show
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such
file or directory)
[root@itsokvm1 ~]# systemctl status openvswitch
openvswitch.service - Open vSwitch
Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled)
Active: inactive (dead)
[root@itsokvm1 ~]# systemctl enable openvswitch
ln -s '/usr/lib/systemd/system/openvswitch.service'
'/etc/systemd/system/multi-user.target.wants/openvswitch.service'
[root@itsokvm1 ~]# systemctl start openvswitch
[root@itsokvm1 ~]# systemctl status openvswitch
openvswitch.service - Open vSwitch
Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled)
Active: active (exited) since Wed 2015-09-23 09:00:14 EDT; 3s ago
Process: 5366 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 5366 (code=exited, status=0/SUCCESS)
Sep 23 09:00:14 itsokvm2 systemd[1]: Starting Open vSwitch...
Sep 23 09:00:14 itsokvm2 systemd[1]: Started Open vSwitch.
[root@itsokvm1 ~]# ovs-vsctl show
bcd5c59b-b1fd-4f95-8f66-926c1ffdc227
ovs_version: "2.3.0"
We created two OVS bridges and added bonding interfaces consisting of two NICs to connect
each bridge to the LAN, as shown in Example 3-15.
Example 3-15 Create bridge and bond port
[root@itsokvm1 ~]# ovs-vsctl add-br vsw_mgmt
[root@itsokvm1 ~]# ovs-vsctl add-br vsw_data
[root@itsokvm1 ~]# ovs-vsctl add-bond vsw_mgmt bond0 enccw0.0.2d03 enccw0.0.2d43
[root@itsokvm1 ~]# ovs-vsctl add-bond vsw_data bond1 enccw0.0.2d23 enccw0.0.2d63
Example 3-16 shows the defined switches and their interfaces.
Example 3-16 Defined bridges
[root@itsokvm1 ~]# ovs-vsctl show
e7d10201-8a83-42db-a8c9-96aa7a9bb17c
Bridge vsw_mgmt
Port vsw_mgmt
Interface vsw_mgmt
type: internal
Port "bond0"
Interface "enccw0.0.2d43"
Interface "enccw0.0.2d03"
Bridge vsw_data
Port vsw_data
Interface vsw_data
type: internal
Port "bond1"
Interface "enccw0.0.2d63"
Interface "enccw0.0.2d23"
ovs_version: "2.3.0"
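Deploying virtual machines is covered later in this chapter, but as a hedged sketch of how a
guest NIC attaches to one of these bridges, the interface definition in a libvirt domain XML
file can look like this (the sketch assumes a virtio NIC on vsw_data; the values are not from
our domain definitions):
<!-- attach the guest NIC to the vsw_data Open vSwitch bridge -->
<interface type='bridge'>
  <source bridge='vsw_data'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>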
Adding LUNs
We decided to add two more LUNs to our environment to have more space available for qcow2
files. We added those two LUNs to the root volume group and extended the root file system
dynamically.
To make LUNs available to the system, we performed the steps outlined in Example 3-17.
The first multipath command output shows the original setup, where three LUNs were
available. Next, we added paths to two more LUNs to the /etc/zfcp.conf file. Then, we ran
zfcpconf.sh, which reads the /etc/zfcp.conf file and makes the devices that it lists available to
the system. This is followed by another multipath command, which shows that the two new
LUNs became available.
Example 3-17 Adding LUNs
[root@itsokvm1 ~]# multipath -l
360050768018305e120000000000000ec dm-0 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:0:2 sdo 8:224 active ready running
| |- 0:0:2:2 sdu 65:64 active ready running
| |- 1:0:2:2 sda 8:0 active ready running
| `- 1:0:3:2 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 0:0:1:2 sdr 65:16 active ready running
|- 0:0:3:2 sdx 65:112 active ready running
|- 1:0:4:2 sdc 8:32 active ready running
`- 1:0:5:2 sdd 8:48 active ready running
360050768018305e120000000000000eb dm-1 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:1:1 sdq 65:0 active ready running
| |- 0:0:3:1 sdw 65:96 active ready running
| |- 1:0:4:1 sdg 8:96 active ready running
| `- 1:0:5:1 sdh 8:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 0:0:0:1 sdn 8:208 active ready running
|- 0:0:2:1 sdt 65:48 active ready running
|- 1:0:2:1 sde 8:64 active ready running
`- 1:0:3:1 sdf 8:80 active ready running
360050768018305e120000000000000ea dm-5 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:0:0 sdm 8:192 active ready running
| |- 0:0:2:0 sds 65:32 active ready running
| |- 1:0:2:0 sdi 8:128 active ready running
| `- 1:0:3:0 sdj 8:144 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 0:0:1:0 sdp 8:240 active ready running
|- 0:0:3:0 sdv 65:80 active ready running
|- 1:0:4:0 sdk 8:160 active ready running
`- 1:0:5:0 sdl 8:176 active ready running
[root@itsokvm1 ~]# vi /etc/zfcp.conf
0.0.b600 0x500507680130bc24 0x0002000000000000
0.0.b600 0x500507680120bb91 0x0002000000000000
0.0.b700 0x500507680120bc24 0x0002000000000000
0.0.b600 0x500507680130bb91 0x0002000000000000
0.0.b700 0x500507680130bc24 0x0002000000000000
0.0.b600 0x500507680120bc24 0x0002000000000000
0.0.b700 0x500507680120bb91 0x0002000000000000
0.0.b700 0x500507680130bb91 0x0002000000000000
0.0.b600 0x500507680130bc24 0x0000000000000000
0.0.b600 0x500507680120bb91 0x0000000000000000
0.0.b700 0x500507680120bc24 0x0000000000000000
0.0.b600 0x500507680130bb91 0x0000000000000000
0.0.b700 0x500507680130bc24 0x0000000000000000
0.0.b600 0x500507680120bc24 0x0000000000000000
0.0.b700 0x500507680130bb91 0x0000000000000000
0.0.b700 0x500507680120bb91 0x0000000000000000
0.0.b600 0x500507680130bc24 0x0001000000000000
0.0.b600 0x500507680120bb91 0x0001000000000000
0.0.b700 0x500507680120bc24 0x0001000000000000
0.0.b600 0x500507680130bb91 0x0001000000000000
0.0.b700 0x500507680130bc24 0x0001000000000000
0.0.b700 0x500507680120bb91 0x0001000000000000
0.0.b600 0x500507680120bc24 0x0001000000000000
0.0.b700 0x500507680130bb91 0x0001000000000000
0.0.b600 0x500507680130bc24 0x0003000000000000
0.0.b600 0x500507680120bb91 0x0003000000000000
0.0.b700 0x500507680120bc24 0x0003000000000000
0.0.b600 0x500507680130bb91 0x0003000000000000
0.0.b700 0x500507680130bc24 0x0003000000000000
0.0.b700 0x500507680120bb91 0x0003000000000000
0.0.b600 0x500507680120bc24 0x0003000000000000
0.0.b700 0x500507680130bb91 0x0003000000000000
0.0.b600 0x500507680130bc24 0x0004000000000000
0.0.b600 0x500507680120bb91 0x0004000000000000
0.0.b700 0x500507680120bc24 0x0004000000000000
0.0.b600 0x500507680130bb91 0x0004000000000000
0.0.b700 0x500507680130bc24 0x0004000000000000
0.0.b700 0x500507680120bb91 0x0004000000000000
0.0.b600 0x500507680120bc24 0x0004000000000000
0.0.b700 0x500507680130bb91 0x0004000000000000
[root@itsokvm1 ~]# zfcpconf.sh
[root@itsokvm1 ~]# multipath -l
360050768018305e120000000000000ee dm-11 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:4 sdag 66:0 active undef running
| |- 1:0:2:4 sdah 66:16 active undef running
| |- 0:0:0:4 sdai 66:32 active undef running
| `- 1:0:3:4 sdaj 66:48 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 1:0:4:4 sdal 66:80 active undef running
|- 0:0:1:4 sdak 66:64 active undef running
|- 0:0:3:4 sdan 66:112 active undef running
`- 1:0:5:4 sdam 66:96 active undef running
360050768018305e120000000000000ed dm-9 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:1:3 sdac 65:192 active undef running
| |- 1:0:5:3 sdae 65:224 active undef running
| |- 1:0:4:3 sdad 65:208 active undef running
| `- 0:0:3:3 sdaf 65:240 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:3 sdy 65:128 active undef running
|- 0:0:0:3 sdaa 65:160 active undef running
|- 1:0:3:3 sdab 65:176 active undef running
`- 1:0:2:3 sdz 65:144 active undef running
360050768018305e120000000000000ec dm-0 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:0:2 sdo 8:224 active undef running
| |- 0:0:2:2 sdu 65:64 active undef running
| |- 1:0:2:2 sda 8:0 active undef running
| `- 1:0:3:2 sdb 8:16 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:1:2 sdr 65:16 active undef running
|- 0:0:3:2 sdx 65:112 active undef running
|- 1:0:4:2 sdc 8:32 active undef running
`- 1:0:5:2 sdd 8:48 active undef running
360050768018305e120000000000000eb dm-1 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:1:1 sdq 65:0 active undef running
| |- 0:0:3:1 sdw 65:96 active undef running
| |- 1:0:4:1 sdg 8:96 active undef running
| `- 1:0:5:1 sdh 8:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:0:1 sdn 8:208 active undef running
|- 0:0:2:1 sdt 65:48 active undef running
|- 1:0:2:1 sde 8:64 active undef running
`- 1:0:3:1 sdf 8:80 active undef running
360050768018305e120000000000000ea dm-5 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:0:0 sdm 8:192 active undef running
| |- 0:0:2:0 sds 65:32 active undef running
| |- 1:0:2:0 sdi 8:128 active undef running
| `- 1:0:3:0 sdj 8:144 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:1:0 sdp 8:240 active undef running
|- 0:0:3:0 sdv 65:80 active undef running
|- 1:0:4:0 sdk 8:160 active undef running
`- 1:0:5:0 sdl 8:176 active undef running
The next step is to create partitions on the new LUNs, as shown in Example 3-18.
Example 3-18 Creating partitions
[root@itsokvm1 ~]# fdisk /dev/disk/by-id/dm-uuid-mpath-360050768018305e120000000000000ed
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@itsokvm1 ~]# fdisk /dev/disk/by-id/dm-uuid-mpath-360050768018305e120000000000000ee
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
The partprobe command forces the kernel to reread partitioning information. The ls
command, executed afterward, shows that the new partitions are available to the system. This
is shown in Example 3-19.
Example 3-19 Refresh partitioning information
[root@itsokvm1 ~]# partprobe
device-mapper: remove ioctl on 360050768018305e120000000000000eb1 failed: Device
or resource busy
Warning: parted was unable to re-read the partition table on
/dev/mapper/360050768018305e120000000000000eb (Device or resource busy). This
means Linux won't know anything about the modifications you made.
device-mapper: create ioctl on 360050768018305e120000000000000eb1 failed: Device
or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000eb1 failed: Device
or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ea1 failed: Device
or resource busy
Warning: parted was unable to re-read the partition table on
/dev/mapper/360050768018305e120000000000000ea (Device or resource busy). This
means Linux won't know anything about the modifications you made.
device-mapper: create ioctl on 360050768018305e120000000000000ea1 failed: Device
or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ea1 failed: Device
or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec3 failed: Device
or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec2 failed: Device
or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec1 failed: Device
or resource busy
Warning: parted was unable to re-read the partition table on
/dev/mapper/360050768018305e120000000000000ec (Device or resource busy). This
means Linux won't know anything about the modifications you made.
device-mapper: create ioctl on 360050768018305e120000000000000ec1 failed: Device
or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec1 failed: Device
or resource busy
device-mapper: create ioctl on 360050768018305e120000000000000ec2 failed: Device
or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec2 failed: Device
or resource busy
device-mapper: create ioctl on 360050768018305e120000000000000ec3 failed: Device
or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec3 failed: Device
or resource busy
[root@itsokvm1 ~]# ls -l /dev/mapper/
total 0
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ea ->
../dm-6
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ea1 ->
../dm-8
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000eb ->
../dm-1
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000eb1 ->
../dm-5
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ec ->
../dm-0
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ec1 ->
../dm-2
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ec2 ->
../dm-3
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ec3 ->
../dm-4
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ed ->
../dm-7
lrwxrwxrwx. 1 root root 8 Sep 24 14:17 360050768018305e120000000000000ed1 ->
../dm-11
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ee ->
../dm-9
lrwxrwxrwx. 1 root root 8 Sep 24 14:17 360050768018305e120000000000000ee1 ->
../dm-12
crw-------. 1 root root 10, 236 Sep 24 12:39 control
lrwxrwxrwx. 1 root root 8 Sep 24 14:17 zkvm1-root -> ../dm-10
These new partitions will be added to the root volume group (VG) later. However, for the
loader to be able to bring the root VG up correctly, it needs to be aware of all of the LUNs that
form the root VG. To achieve this, the initramfs must be re-created and zipl updated, as shown in
Example 3-20. There is no need to modify the zipl.conf file, but zfcp.conf must contain all
relevant LUN information because this file is read by the dracut command.
Example 3-20 Modify initial ramdisk
[root@itsokvm1 ~]# dracut -f
[root@itsokvm1 ~]# zipl
Using config file '/etc/zipl.conf'
Run /lib/s390-tools/zipl_helper.device-mapper /boot
Building bootmap in '/boot'
Building menu 'zipl-automatic-menu'
Adding #1: IPL section '3.10.0-123.20.1.el7_0.kvmibm.15.s390x' (default)
Adding #2: IPL section 'linux'
Preparing boot device: dm-0.
Done.
Note: It is important to execute these two commands. Otherwise, the system will not come
up after reboot.
Example 3-21 shows the commands we executed to create physical volumes on new
partitions. Then the physical volumes were added to a volume group, the logical volume was
expanded, and the root file system was resized.
Example 3-21 Creating physical volumes
[root@itsokvm1 ~]# pvcreate /dev/mapper/360050768018305e120000000000000ed1
Physical volume "/dev/mapper/360050768018305e120000000000000ed1" successfully
created
[root@itsokvm1 ~]# pvcreate /dev/mapper/360050768018305e120000000000000ee1
Physical volume "/dev/mapper/360050768018305e120000000000000ee1" successfully
created
[root@itsokvm1 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/360050768018305e120000000000000ea1 zkvm lvm2 a-- 10.00g 0
/dev/mapper/360050768018305e120000000000000eb1 zkvm lvm2 a-- 10.00g 0
/dev/mapper/360050768018305e120000000000000ec3 zkvm lvm2 a-- 5.50g 0
/dev/mapper/360050768018305e120000000000000ed1 lvm2 a-- 10.00g 10.00g
/dev/mapper/360050768018305e120000000000000ee1 lvm2 a-- 10.00g 10.00g
Example 3-22 shows how to add physical volumes to the volume group. It shows volume
group information before and after the volume was extended, in addition to physical volume
information after the new physical volumes were added to the volume group.
Example 3-22 Adding physical volumes to the volume group
[root@itsokvm1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
zkvm 3 1 0 wz--n- 25.49g 0
[root@itsokvm1 ~]# vgextend zkvm /dev/mapper/360050768018305e120000000000000ed1
/dev/mapper/360050768018305e120000000000000ee1
Volume group "zkvm" successfully extended
[root@itsokvm1 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/360050768018305e120000000000000ea1 zkvm lvm2 a-- 10.00g 0
/dev/mapper/360050768018305e120000000000000eb1 zkvm lvm2 a-- 10.00g 0
/dev/mapper/360050768018305e120000000000000ec3 zkvm lvm2 a-- 5.50g 0
/dev/mapper/360050768018305e120000000000000ed1 zkvm lvm2 a-- 10.00g 10.00g
/dev/mapper/360050768018305e120000000000000ee1 zkvm lvm2 a-- 10.00g 10.00g
[root@itsokvm1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
zkvm 5 1 0 wz--n- 45.48g 19.99g
Example 3-23 shows the lvextend command together with logical volume information before
and after running the lvextend command.
Example 3-23 Extending a logical volume and resizing the file system
[root@itsokvm1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
root zkvm -wi-ao---- 25.49g
[root@itsokvm1 ~]# lvextend /dev/mapper/zkvm-root -L +19G
Extending logical volume root to 44.49 GiB
Logical volume root successfully resized
[root@itsokvm1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
root zkvm -wi-ao---- 44.49g
Example 3-24 shows resizing of the root file system. It also shows the output of the df
command before and after resizing.
Example 3-24 Resizing the root file system
[root@itsokvm1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/zkvm-root 25G 3.9G 20G 17% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 8.5M 32G 1% /run
tmpfs 32G 0 32G 0%
/sys/fs/cgroup
/dev/mapper/360050768018305e120000000000000ec1 488M 80M 373M 18% /boot
[root@itsokvm1 ~]# resize2fs /dev/mapper/zkvm-root
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/mapper/zkvm-root is mounted on /; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 6
The filesystem on /dev/mapper/zkvm-root is now 11662336 blocks long.
[root@itsokvm1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/zkvm1-root 44G 3.9G 38G 10% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 8.5M 32G 1% /run
tmpfs 32G 0 32G 0%
/sys/fs/cgroup
/dev/mapper/360050768018305e120000000000000ec1 488M 80M 373M 18% /boot
Additional space provided by new LUNs is now available to KVM for IBM z for use.
3.3 Deploying virtual machines
This section describes the steps we performed in KVM for IBM z for defining a domain and
installing a Linux on z Systems virtual machine into that domain.
The following tasks are described in this section:
3.3.1, “Preparing the environment” on page 58
3.3.2, “Installing Linux on z Systems” on page 61
3.3.3, “Modifying domain definitions” on page 61
3.3.4, “Linux on z Systems configuration” on page 63
3.3.1 Preparing the environment
Example 3-25 shows the creation of a 5 GB qcow2 file, which is provided as a virtual disk to
the virtual machine.
Example 3-25 qcow2 disk
[root@itsokvm1 ~]# cd /var/lib/libvirt/images/
[root@itsokvm1 images]# qemu-img create -f qcow2 linux80.img 5G
Formatting 'linux80.img', fmt=qcow2 size=5368709120 encryption=off
cluster_size=65536 lazy_refcounts=off refcount_bits=16
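To double-check the new image, you can run qemu-img info against it; output similar to the
following sketch is expected:
[root@itsokvm1 images]# qemu-img info linux80.img
image: linux80.img
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536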
The initial ramdisk and kernel files are needed for Linux on z Systems installation. We
obtained them from the installation DVD on the FTP server and renamed them to suit this
scenario, as depicted in Example 3-26.
Example 3-26 Obtaining files
[root@itsokvm1 images]# curl
ftp://ftp:ftp@192.168.60.15/SLES12SP1/DVD1/boot/s390x/cd.ikr > s12-kernel.boot
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 45.3M 100 45.3M 0 0 22.8M 0 0:00:01 0:00:01 --:--:-- 22.8M
[root@itsokvm1 images]# curl
ftp://ftp:ftp@192.168.60.15/SLES12SP1/DVD1/boot/s390x/initrd > s12-initrd.boot
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 29.3M 100 29.3M 0 0 16.4M 0 0:00:01 0:00:01 --:--:-- 16.4M
Lastly, we created domain definition files in .xml format. We found it convenient to create two
files for a domain: one for installation purposes and one for regular use after installation.
Example 3-27 shows the linux80.xml.install file, which contains definitions for booting the
installation files.
Example 3-27 linux80.xml.install
<domain type='kvm'>
<name>linux80</name>
<description>Guest-System Suse Sles12</description>
<memory>524288</memory>
<vcpu>1</vcpu>
<cputune>
</cputune>
<os>
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<!-- Boot kernel - remove these three lines after successful installation -->
<kernel>/var/lib/libvirt/images/s12-kernel.boot</kernel>
<initrd>/var/lib/libvirt/images/s12-initrd.boot</initrd>
<cmdline>HostIP=192.168.60.80/24 Hostname=linux80.itso.ibm.com
Gateway=192.168.60.1 Layer2=1 Install=ftp://ftp:ftp@192.168.60.15/SLES12SP1/DVD1/
UseVNC=1 VNCPassword=12345678 InstNetDev=virtio Manual=0 </cmdline>
<boot dev='hd'/>
</os>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>preserve</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-s390x</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/linux80.img'/>
<target dev='vda' bus='virtio'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0002'/>
</disk>
<interface type="bridge">
<source bridge="vsw_mgmt"/>
<virtualport type="openvswitch"/>
<model type="virtio"/>
</interface>
<console type='pty'>
<target type='sclp' port='0'/>
</console>
</devices>
</domain>
Example 3-28 shows a definition of the linux80.xml file. The kernel, initrd, and cmdline
statements were removed. One more network interface was defined for the vsw_data OVS
bridge.
Example 3-28 linux80.xml
<domain type='kvm'>
<name>linux80</name>
<description>Guest-System Suse Sles12</description>
<memory>524288</memory>
<vcpu>1</vcpu>
<cputune>
</cputune>
<os>
<type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
<boot dev='hd'/>
</os>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>preserve</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-s390x</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/linux80.img'/>
<target dev='vda' bus='virtio'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0002'/>
</disk>
<interface type="bridge">
<source bridge="vsw_mgmt"/>
<virtualport type="openvswitch"/>
<model type="virtio"/>
</interface>
<interface type="bridge">
<source bridge="vsw_data"/>
<virtualport type="openvswitch"/>
<model type="virtio"/>
</interface>
<console type='pty'>
<target type='sclp' port='0'/>
</console>
</devices>
</domain>
3.3.2 Installing Linux on z Systems
Example 3-29 shows how we defined and started the linux80 domain. Because its .xml file
points to the installation initial RAM disk and kernel, starting it begins the installation of
Linux on z Systems.
Example 3-29 Defining and starting Linux on z Systems installation
[root@itsokvm1 images]# virsh define linux80.xml.install
Domain linux80 defined from linux80.xml.install
[root@itsokvm1 ~]# virsh start linux80 --console
Domain linux80 started
Connected to domain linux80
...
starting VNC server...
A log file will be written to: /var/log/YaST2/vncserver.log ...
***
*** You can connect to <host>, display :1 now with vncviewer
*** Or use a Java capable browser on http://<host>:5801/
***
(When YaST2 is finished, close your VNC viewer and return to this window.)
Active interfaces:
eth0 Link encap:Ethernet HWaddr 52:54:00:A4:E3:B5
inet addr:192.168.60.80 Bcast:192.168.60.255 Mask:255.255.255.0
--
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
*** Starting YaST2 ***
Linux on z Systems can be installed using the virtual network computing (VNC) interface.
Installing Linux on z Systems in a KVM for IBM z environment is no different than any other
Linux on z Systems installation. For more details, including installation panel captures, see
The Virtualization Cookbook for IBM z Systems Volume 3: SUSE Linux Enterprise Server 12,
SG24-8890:
http://www.redbooks.ibm.com/abstracts/sg248890.html?Open
3.3.3 Modifying domain definitions
After Linux on z Systems is installed, it is automatically rebooted. Because the domain
definition still specifies the installation initial RAM disk and kernel as the boot device, the
installation process starts again from the beginning. To leave the console and execute virsh
commands, press Ctrl + ] (right bracket) to return to the shell.
Example 3-30 shows the commands we executed to redefine the linux80 domain:
The destroy command shuts down the virtual machine.
The undefine command removes the domain definition from KVM for IBM z. Linux on z
Systems was installed in the qcow2 file and can be used in the new domain definition.
The define command defines the linux80 domain again, this time from an .xml file that
defines the virtual hard disk as a boot device.
The edit command allows you to make changes to an existing virtual machine
configuration file. A text editor will open with the contents of the given .xml file.
Example 3-30 Redefine domain
[root@itsokvm1 images]# virsh destroy linux80
Domain linux80 destroyed
[root@itsokvm1 images]# virsh undefine linux80
Domain linux80 has been undefined
[root@itsokvm1 images]# virsh define linux80.xml
Domain linux80 defined from linux80.xml
After the domain is redefined, start it again. This time, the previously installed Linux on z
Systems server is brought up from the virtual disk, as shown in Example 3-31.
Example 3-31 Start virtual machine
[root@itsokvm1 images]# virsh start linux80 --console
Domain linux80 started
Connected to domain linux80
...
+----------------------------------------------------------------------------+
|*SLES12-SP1 |
| Advanced options for SLES12-SP1 |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
+----------------------------------------------------------------------------+
...
Welcome to SUSE Linux Enterprise Server 12 SP1 Beta3 (s390x) - Kernel
3.12.47-2-default (ttysclp0).
linux80 login:
3.3.4 Linux on z Systems configuration
As described in 3.1.1, “Logical view” on page 28, our virtual servers need access to two
LANs. This is specific to each environment. During Linux on z Systems installation, one NIC
was configured, which connects the virtual server to the vsw_mgmt Open vSwitch bridge. In
3.3.3, “Modifying domain definitions” on page 61, we added another NIC, which connects the
virtual server to the vsw_data network, but this NIC is not yet configured in Linux on z
Systems.
We used the YaST Control Center to configure the second NIC. After exiting YaST, both
network interfaces were configured, as shown in Example 3-32.
Example 3-32 Two NICs configured
linux80:~ # ifconfig
eth0 Link encap:Ethernet HWaddr 52:54:00:59:B3:CE
inet addr:192.168.60.80 Bcast:192.168.60.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe59:b3ce/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:59934 errors:0 dropped:20 overruns:0 frame:0
TX packets:50302 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6322856 (6.0 Mb) TX bytes:115456986 (110.1 Mb)
eth1 Link encap:Ethernet HWaddr 52:54:00:04:A9:80
inet addr:172.16.60.80 Bcast:172.16.60.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe04:a980/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:759 errors:0 dropped:0 overruns:0 frame:0
TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:188951 (184.5 Kb) TX bytes:746 (746.0 b)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:13169 errors:0 dropped:0 overruns:0 frame:0
TX packets:13169 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:111754696 (106.5 Mb) TX bytes:111754696 (106.5 Mb)
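Behind the scenes, YaST writes the configuration for the second NIC to
/etc/sysconfig/network/ifcfg-eth1 on SLES 12. The following is a minimal sketch of what that
file might contain for the address shown in Example 3-32; the exact contents depend on the
options chosen in YaST:
BOOTPROTO='static'
IPADDR='172.16.60.80/24'
STARTMODE='auto'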
Chapter 4. Managing and monitoring the
environment
In this chapter, we discuss various tools for managing KVM-based guest operating systems.
Specifically, we discuss the virsh and Nagios tools from the management and monitoring
perspectives, respectively. We also describe how virsh can be used for managing the guest
from a command-line interface (CLI).
The following topics are covered in this chapter:
KVM on IBM z System management interfaces
Using virsh
Monitoring KVM for IBM z Systems
4.1 KVM on IBM z System management interfaces
Although KVM provides a simple mechanism for sharing resources by isolating the
application’s software environment, most of the guest virtual machines incur some kind of
virtualization overhead. This overhead varies depending on the type of application, the type of
virtualization, and the virtual machine (VM) monitor used.
For I/O-intensive applications in particular, the CPU overhead that KVM incurs on behalf of
the VMs can be considerable and affects the performance characteristics of applications.
This makes it necessary to have tools for managing and monitoring the resources used by
KVM for IBM z Systems.
Figure 4-1 provides a high-level view of various interfaces and tools that are available for
virtual server management.
Figure 4-1 Management and monitoring interfaces
4.1.1 Introduction to the libvirt management stack
Libvirt, the virtualization application programming interface (API), provides a common layer of
abstraction and control for virtual machines that are deployed within many different
hypervisors, including KVM. The main components of libvirt are the control daemon, a stable
C language API, a corresponding set of Python language bindings, and a simple shell
environment. Currently, all KVM management tools (including virsh and OpenStack) use
libvirt as the underlying VM control mechanism. Libvirt stores information, such as the disk
image and networking configuration, in an .xml file. This file is independent of the hypervisor
in use.
Figure 4-2 provides a pictorial view of the virsh interface with libvirt for virtual server
management.
Figure 4-2 Libvirt interface with virsh
Virsh
Virsh provides an easy-to-use console shell interface to the libvirt library for controlling guest
instances. Each of the commands available in virsh can be used either from the virsh
environment or called from a standard Linux console:
To start a virsh environment, run the virsh shell program with no options. This opens a new
console-like environment in which you can run any of the built-in commands for virsh.
To use the virsh commands from a Linux terminal, run virsh followed by the command
name and command options.
Custom scripting
Libvirt provides stable C language APIs for VM management. It supports C and C++ directly
and also provides a comprehensive set of Python language bindings. By combining the libvirt
API with Python, scripts can implement all of the functions that are needed for virtual server
management on KVM for IBM z Systems.
4.2 Using virsh
Virsh is the main CLI for libvirt for managing virtual machines, and it provides many
commands. We begin by describing some of the basic commands. A complete list of the
virsh commands supported in KVM for IBM z, with detailed descriptions, is available in KVM
Virtual Server Management, SC34-2752, in the IBM Knowledge Center:
https://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_va.html
Tip: For more information about how to create scripts to manage KVM virtual machines,
see http://www.ibm.com/developerworks/library/os-python-kvm-scripting1/
4.2.1 Basic commands
This section describes basic virsh commands:
define Creates a virtual server with the unique name specified in the domain
configuration .xml file.
start Starts a defined virtual server. Using the --console option grants initial
access to the virtual server console and displays all messages that are
issued to the console.
shutdown Terminates a running virtual server by sending a shutdown signal to the
VM, which allows the operating system of the VM to shut down properly.
destroy Immediately terminates a virtual server without any interaction with the
operating system running on a VM.
undefine Deletes the definition of a virtual server from libvirt.
list Without an option, this lists the running virtual servers. With the --all
option, this command lists all defined virtual servers.
edit Opens the libvirt internal definition of a VM and allows it to be
changed. These changes are not applied dynamically; they become
effective after a restart of the VM.
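Taken together, a typical lifecycle using these commands looks like the following sketch (the
linux80 domain from Chapter 3 is used for illustration; command output is omitted):
[root@itsokvm1 ~]# virsh define linux80.xml
[root@itsokvm1 ~]# virsh start linux80
[root@itsokvm1 ~]# virsh list --all
[root@itsokvm1 ~]# virsh shutdown linux80
[root@itsokvm1 ~]# virsh undefine linux80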
4.2.2 Add I/O resources dynamically
The virsh command attach-device enables you to add an I/O device dynamically to a VM. It
takes as input an .xml definition file that describes the device. The following examples show
how to add an I/O device to a VM.
Example 4-1 shows that, before running the attach-device command, there is only one disk
available in linux82.
Example 4-1 Before running the attach-device command
linux82:~ # ls -l /dev/vd*
brw-rw---- 1 root disk 254, 0 Sep 30 12:04 /dev/vda
brw-rw---- 1 root disk 254, 1 Sep 30 12:04 /dev/vda1
brw-rw---- 1 root disk 254, 2 Sep 30 12:04 /dev/vda2
brw-rw---- 1 root disk 254, 3 Sep 30 12:04 /dev/vda3
Example 4-2 shows an .xml definition of another LUN available to KVM for IBM z. It will be
visible in the VM as device vdb.
Example 4-2 The .xml definition
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native' iothread='0'/>
<source dev='/dev/mapper/360050768018305e120000000000000f2'/>
<target dev='vdb' bus='virtio'/>
</disk>
Example 4-3 shows the command that reads the .xml file and attaches a device to a running
VM. This command attaches a disk to a VM temporarily. To make the change permanent, use
the --config parameter.
Example 4-3 The attach-device command
[root@itsokvm1 images]# virsh attach-device linux82 add_lun_8.xml
Device attached successfully
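For example, to make the same attachment persistent, the command might look like the
following sketch (depending on the libvirt level, you can combine --live and --config to
change both the running VM and its saved definition):
[root@itsokvm1 images]# virsh attach-device linux82 add_lun_8.xml --config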
Example 4-4 shows the new device vdb available to VM linux82.
Example 4-4 After the attach-device
linux82:~ # ls -l /dev/vd*
brw-rw---- 1 root disk 254, 0 Sep 30 12:04 /dev/vda
brw-rw---- 1 root disk 254, 1 Sep 30 12:04 /dev/vda1
brw-rw---- 1 root disk 254, 2 Sep 30 12:04 /dev/vda2
brw-rw---- 1 root disk 254, 3 Sep 30 12:04 /dev/vda3
brw-rw---- 1 root disk 254, 16 Sep 30 13:39 /dev/vdb
brw-rw---- 1 root disk 254, 17 Sep 30 13:39 /dev/vdb1
4.2.3 VM live migration
The IBM Knowledge Center article titled KVM Virtual Server Management, SC34-2752
describes the details and considerations for migrating a virtual machine to another instance of
KVM for IBM z:
https://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_va.html
The most important requirement is to have equal I/O resources available in both
environments.
Default firewall settings on KVM for IBM z do not allow for live migration. Example 4-5 shows
the commands to execute on both of the KVM for IBM z images to allow for live migration
between them.
Example 4-5 Setting up a firewall to allow for live migration
[root@itsokvm2 ~]# firewall-cmd --zone=public --add-port=49152-49215/tcp --permanent
success
[root@itsokvm2 ~]# firewall-cmd --reload
Although we used IP addresses and not host names in the migrate command, we still needed
to create records for the target KVM for IBM z in the /etc/hosts file. Otherwise, the migrate
command reports an error.
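For example, the /etc/hosts records on the source host might look like the following sketch,
using the addresses and names from our environment:
192.168.60.70   itsokvm1.itso.ibm.com   itsokvm1
192.168.60.71   itsokvm2.itso.ibm.com   itsokvm2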
Example 4-6 lists the running VMs on both KVM for IBM z images before the migration.
Example 4-6 List of running VMs before live migration
[root@itsokvm1 ~]# virsh list
Id Name State
----------------------------------------------------
2 linux80 running
19 instance-00000003 running
24 linux82 running
[root@itsokvm2 ~]# virsh list
Id Name State
----------------------------------------------------
Example 4-7 shows the actual migrate command that we executed.
Example 4-7 Live migration command
[root@itsokvm1 ~]# virsh migrate --live linux82 qemu+ssh://192.168.60.71/system
root@192.168.60.71's password:
Example 4-8 lists the running VMs on both KVM for IBM z Systems images after the
migration.
Example 4-8 List of running VMs after the live migration
[root@itsokvm1 ~]# virsh list
Id Name State
----------------------------------------------------
2 linux80 running
19 instance-00000003 running
[root@itsokvm2 ~]# virsh list
Id Name State
----------------------------------------------------
3 linux82 running
4.3 Monitoring KVM for IBM z Systems
For any virtualized environment, monitoring the hypervisor resources is crucial for predicting
bottlenecks and avoiding downtimes. The rest of this section focuses on monitoring and
describes the steps to configure the open source monitoring tool called Nagios.
4.3.1 Configuring the Nagios monitoring tool
Nagios is a monitoring tool that enables organizations to identify and resolve IT infrastructure
problems before they affect the business. If there is a failure, Nagios alerts the technical staff
about the problem, allowing them to begin the appropriate course of action.
In KVM for IBM z Systems, Nagios monitoring is enabled using the Nagios remote plug-in
executor (NRPE), which is the preferred method for remote monitoring of hosts.
The following Nagios plug-ins are enabled in KVM for IBM z Systems:
Load average
Disk usage
Process count and resource usage
The next step is to prepare the configuration file /etc/nagios/nrpe.cfg with
environment-related attributes. Back up the configuration file, and then update the
attributes. Figure 4-3 shows the relationship between the Nagios server and the monitored
host.
Figure 4-3 Nagios server and monitored host using NRPE
Configuring the Nagios server
The NRPE daemon is designed to enable you to execute the Nagios plug-ins on remote Linux
or UNIX machines. The main reason for doing this is to allow Nagios to monitor local
resources (such as CPU load and memory usage) on remote machines. Because these
resources are not usually exposed to external machines, an agent such as NRPE must be
installed on the remote Linux or UNIX machines, where the /etc/nagios/nrpe.cfg file
needs to be configured. See Example 4-9.
Example 4-9 Attributes to change in the /etc/nagios/nrpe.cfg file
server_address=192.168.60.70
allowed_hosts=127.0.0.1,192.168.60.15
command[check_users]=/usr/lib64/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib64/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p
/dev/mapper/zkvm1-root
command[check_zombie_procs]=/usr/lib64/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib64/nagios/plugins/check_procs -w 150 -c 200
Important: In this section, we cover only setting up the Nagios NRPE plug-in that is
packaged with KVM for IBM z Systems (see Monitored Host in Figure 4-3). The NRPE
daemon requires that the Nagios plug-ins be installed on the remote Linux or UNIX host.
Without these, the daemon cannot monitor the nodes. For implementing the Nagios server,
see the Nagios Quickstart Installation Guides website:
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/quickstart.html
Now we can start the NRPE daemon, as shown in Example 4-10.
Example 4-10 Start the NRPE daemon
[root@itsokvm1 nagios]# systemctl start nrpe.service
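The nrpe service is shipped disabled (see the status output in Example 4-11). If you want
NRPE to start automatically at boot time, you can also enable it:
[root@itsokvm1 nagios]# systemctl enable nrpe.service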
After the NRPE is started in the hypervisor, as shown in Example 4-11, verify that the port
(5666) used by NRPE is in a listening state.
Example 4-11 Starting NRPE in KVM for IBM z Systems
[root@itsokvm1 /]# systemctl status nrpe.service
nrpe.service - NRPE
Loaded: loaded (/usr/lib/systemd/system/nrpe.service; disabled)
Active: active (running) since Tue 2015-09-29 11:43:15 EDT; 1 day 3h ago
Process: 22566 ExecStart=/usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d $NRPE_SSL_OPT (code=exited,
status=0/SUCCESS)
Main PID: 22567 (nrpe)
CGroup: /system.slice/nrpe.service
ââ22567 /usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d
Sep 29 11:43:15 itsokvm1.itso.ibm.com nrpe[22567]: Starting up daemon
Sep 29 11:43:15 itsokvm1.itso.ibm.com systemd[1]: Started NRPE.
Sep 29 11:43:15 itsokvm1.itso.ibm.com nrpe[22567]: Server listening on 192.168.60.70 port 5666.
Sep 29 11:43:15 itsokvm1.itso.ibm.com nrpe[22567]: Listening for connections on port 0
Sep 29 11:43:15 itsokvm1.itso.ibm.com nrpe[22567]: Allowing connections from:
127.0.0.1,192.168.60.15
[root@itsokvm1 /]#
[root@itsokvm1 /]# netstat -pant | grep nrpe
tcp 0 0 192.168.60.70:5666 0.0.0.0:* LISTEN 22567/nrpe
[root@itsokvm1 /]#
Next, we need to check whether the NRPE daemon is functioning properly. Execute the
check_nrpe plug-in, which is packaged with the Nagios tool for testing purposes.
From the Nagios server, execute the command shown in Example 4-12 with the IP address of
the server that needs to be monitored.
Example 4-12 Verification of NRPE communication with other remote hosts
[root@monitoring ~]# /usr/local/nagios/libexec/check_nrpe -H 192.168.60.70
NRPE v2.15
[root@monitoring ~]#
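You can also exercise an individual remote command. For example, the following sketch
queries the check_load command that is defined in nrpe.cfg; the exact output depends on the
plug-in version and current load:
[root@monitoring ~]# /usr/local/nagios/libexec/check_nrpe -H 192.168.60.70 -c check_load
OK - load average: 0.01, 0.04, 0.05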
Configuring the remote host (monitored)
Next, create a few object definitions so that you can monitor the remote Linux or UNIX
machine. We created a hosts.cfg file, shown in Example 4-13, in which our zKVM HyperVisor
template definition inherits default values from the generic-host template. We also defined
new hosts for the remote itsozkvm1 and itsozkvm2 machines that reference our newly
created zKVM HyperVisor host template.
Example 4-13 The hosts.cfg file with object definitions
[root@monitoring etc]# pwd
/usr/local/nagios/etc
[root@monitoring etc]# cat hosts.cfg
## Default Linux Host Template ##
define host{
        name                    zKVM HyperVisor     ; Name of this template
        use                     generic-host        ; Inherit default values
        check_period            24x7
        check_interval          5
        retry_interval          1
        max_check_attempts      10
        check_command           check-host-alive
        notification_period     24x7
        notification_interval   30
        notification_options    d,r
        contact_groups          admins
        register                0                   ; DONT REGISTER THIS - ITS A TEMPLATE
}
## Default
define host{
        use                     zKVM HyperVisor     ; Inherit default values from a template
        host_name               itsozkvm1           ; The name we're giving to this server
        alias                   IBM KVM             ; A longer name for the server
        address                 192.168.60.70       ; IP address of remote Linux host
}
## Default
define host{
        use                     zKVM HyperVisor     ; Inherit default values from a template
        host_name               itsozkvm2           ; The name we're giving to this server
        alias                   IBM KVM             ; A longer name for the server
        address                 192.168.60.71       ; IP address of remote Linux host
}
[root@monitoring etc]#
Next, define the built-in services for monitoring host system resources, as shown in
Example 4-14.
Example 4-14 Define the services that monitor system resources
[root@monitoring etc]# cat services.cfg
define service{
use generic-service
host_name itsozkvm1
service_description CPU Load
check_command check_nrpe!check_load
}
define service{
use generic-service
host_name itsozkvm1
service_description Total Processes
check_command check_nrpe!check_total_procs
}
define service{
use generic-service
host_name itsozkvm1
service_description Current Users
check_command check_nrpe!check_users
}
define service{
use generic-service
host_name itsozkvm1
service_description Disk Usage
check_command check_nrpe!check_hda1
}
define service{
use generic-service
host_name itsozkvm1
service_description Zombie Processes
check_command check_nrpe!check_zombie_procs
}
[root@monitoring etc]#
After restarting the Nagios services on the monitoring host, we can log in to the Nagios web
interface and see the new host and service definitions for the remote KVM hosts included in
Nagios monitoring. In our case, these are itsozkvm1 and itsozkvm2, as shown in Figure 4-4.
Figure 4-4 Map of remote hosts managed by Nagios monitoring
Within a minute or two, Nagios shows the current status information for the KVM for IBM z
Systems host resources (in our case, the itsozkvm1 and itsozkvm2 hosts), as shown in
Figure 4-5.
Figure 4-5 Remote host status
Chapter 5. Building a cloud environment
This chapter provides an overview of a reference implementation of IBM Cloud Manager with
OpenStack for KVM on IBM z Systems. We address the following topics:
Overview of IBM Cloud Manager with OpenStack V4.3
Installing, deploying, and configuring KVM on a cloud based on IBM z Systems
5.1 Overview of IBM Cloud Manager with OpenStack V4.3
In general, based on where organizations deploy cloud services and who can access these
services, there are two main types of cloud-computing models: public cloud and private cloud.
In public clouds, an organization offers resources as a service, usually over an internet
connection, typically for a pay-per-usage fee. In private clouds, the organization deploys
resources inside a firewall and self-manages those resources. Here, the resources and
services are not shared outside of the organization.
This chapter describes how to build a private cloud. IBM provides a complete ecosystem and
tools for building a highly effective private cloud, for which the following factors need to be
considered:
Security
Resilience
Performance
Scalability to thousands of nodes
Openness and heterogeneity
Interoperability
IBM z Systems is an ideal platform for an effective private cloud, based on the following
strengths:
Openness and heterogeneity
The inherent strengths of IBM Cloud Manager with OpenStack and KVM for IBM z
Systems
Reliability, availability, and serviceability (RAS) features
The fundamental building blocks of IBM z Systems
5.1.1 IBM Cloud Manager with OpenStack version 4.3
IBM Cloud Manager with OpenStack is an easy-to-deploy, simple-to-use cloud management
software offering based on OpenStack with IBM enhancements. IBM Cloud Manager features
an IBM Self-Service portal for workload provisioning, virtual image management, and
monitoring. It is an innovative, cost-effective approach that also includes automation,
metering, and security for your virtualized environment.
IBM Cloud Manager with OpenStack supports KVM for IBM z Systems compute nodes. KVM
for IBM z Systems compute nodes must run in a z Systems logical partition. The KVM for IBM
z Systems compute node must satisfy the following requirements:
Operating system: KVM for IBM z Systems version 1.1
Hardware: zEC12/zBC12 or later
Important: Support for KVM for IBM z Systems has been included with Fix Pack 3 of IBM
Cloud Manager V4.3.
For further details about KVM for IBM z Systems prerequisites and support, see these IBM
Knowledge Center pages:
KVM for IBM z Systems prerequisites
http://ibm.co/1Lpru5Z
KVM for IBM z Systems
http://www.ibm.com/support/knowledgecenter/SSNW54_1.1.0
For IBM z Systems, IBM Cloud Manager transforms an installation of KVM on IBM z Systems
and the required storage and network infrastructure into an entry level private cloud solution
that provides the following functions:
Self-service portal
Automated provisioning of virtual machines (VMs)
Automated deprovisioning of VMs
Cloning and snapshots of workloads
Starting and stopping of VMs
Resizing existing VMs
Approval lifecycle
Email notifications
Billing and accounting
5.1.2 Environmental setup
As a starting point for our IBM Cloud Manager deployment, we reuse the same KVM host that
was deployed in Chapter 3, “Installing and configuring the environment” on page 27. Also, our
KVM for IBM z Systems host has met the required networking and storage prerequisites.
The next task is to set up a network topology.
The IBM Cloud Manager with OpenStack solution provides a few predefined example
topologies. Table 5-1 describes the topologies that are supported with IBM Cloud Manager
with OpenStack V4.3.
Note: We suggest using the information from the following web page to review the required
common tasks for getting started with IBM Cloud Manager with OpenStack:
Worksheet: Common production-level topologies
http://ibm.co/1MYaPYn
Table 5-1 Supported topologies
Minimal: For product evaluation purposes. This topology is the simplest topology and does
not require any customization. Some basic customization is supported for the KVM quick
emulator (QEMU) compute hypervisor type.
Controller +n compute: For smaller test or production environments. This topology provides
a single controller node, plus any number of compute nodes. You can configure this topology
for your specific needs; for example, you can configure networking, the resource scheduler,
and other advanced customizations.
HA controller +n compute: For larger test and production environments that require high
availability (HA) cloud controllers. This topology provides multiple HA controller nodes, plus
any number of compute nodes. You can configure this topology for your specific needs.
Distributed database: For larger test or production environments. This topology is similar to
the controller +n compute topology; however, the distributed database topology allows the
IBM Cloud Manager with OpenStack database service to run on a separate node. It also
supports advanced customization.
Multi-region: For larger test or production environments; this topology can include multiple
hypervisor environments. It is similar to the controller +n compute topology; however, you
can separate hypervisors by region. Each region has its own controller, but shares the same
OpenStack Keystone architecture and potentially the IBM Cloud Manager Dashboard.
Controller node with multiple compute nodes
In this publication, we are setting up a cloud topology using a single controller node with
multiple compute nodes. Here, the controller node runs the basic OpenStack services,
including the KVM/QEMU Nova compute service. Figure 5-1 provides a high-level view of the
topology we used.
Figure 5-1 Topology - single controller with multiple compute nodes
When deploying the topology in Figure 5-1, we suggest using a two-node configuration, in
which one node is the deployment server and the other is the IBM Cloud Manager with
OpenStack single controller node. If the KVM/QEMU, KVM for z Systems, PowerKVM,
Hyper-V, or IBM z/VM compute hypervisor is used, one or more additional systems are
required to provide the IBM Cloud Manager with OpenStack compute nodes for the topology.
The IBM Cloud Manager controller nodes have significant CPU and memory requirements
because they contain, at a high level, the Chef client, the IBM Cloud Manager, and the
Self-Service portal.
Table 5-2 provides system information about the controller and compute node in our
environment.
Table 5-2 Controller and compute node environment information
                  Controller node           Compute node
Operating system  RHEL 7.1                  KVM for IBM z Systems V1.1
Interface         enp3s0                    enccw0.0.2d00
Hostname          controller.itso.ibm.com   itsokvm1.itso.ibm.com
IP address        192.168.60.16             192.168.60.70
5.2 Installing, deploying, and configuring KVM on a cloud
based on IBM z Systems
The process of deploying IBM Cloud Manager V4.3 is accomplished using a Chef server.
Chef is an open source automation framework for deploying resources on systems. The Chef
server code is included in the installation package for IBM Cloud Manager with OpenStack.
The following sections provide high-level steps for the cloud deployment process.
5.2.1 Installing and updating IBM Cloud Manager with OpenStack V4.3
In this section, we install and update IBM Cloud Manager with OpenStack V4.3, which
requires the following steps:
1. Complete the prerequisites and create a YUM repository (a minimal sketch follows this list).
2. Install IBM Cloud Manager with OpenStack on the deployment server.
3. Update the Chef server software from the Select Fixes web page on IBM Fix Central:
http://ibm.co/1NaGqdL
4. Verify that the Chef server is installed and running.
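For step 1, a minimal sketch of setting up a local YUM repository from the RHEL 7.1
installation media might look like the following (the ISO path is hypothetical; see the product
documentation for the exact prerequisites):
[root@controller ~]# mkdir -p /mnt/rhel71
[root@controller ~]# mount -o loop /tmp/rhel-server-7.1.iso /mnt/rhel71
[root@controller ~]# cat /etc/yum.repos.d/rhel71.repo
[rhel71]
name=RHEL 7.1 local media
baseurl=file:///mnt/rhel71
enabled=1
gpgcheck=0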
5.2.2 Deploying the IBM Cloud Manager topology
To deploy the IBM Cloud Manager topology, complete these steps:
1. Set up the RHEL 7.1 YUM repository for reference by the Chef server.
2. Create and edit the environment file.
3. Create and edit the topology file.
4. Create, edit, and upload the data bags (if needed).
5. Deploy the topology.
6. Verify the deployment.
7. Configure IBM Cloud Manager with OpenStack V4.3.
For installation steps pertaining to IBM Cloud Manager with OpenStack, see Appendix B,
“Installing IBM Cloud Manager with OpenStack” on page 97.
5.2.3 Creating a cloud environment
An environment is a way to map an organization’s real-life workflow to what can be configured
and managed when using the Chef server. In the following sections, we describe the steps
required to create your own cloud environment.
5.2.4 Environment templates
IBM Cloud Manager with OpenStack V4.3 has several prepackaged environments. Using the
knife command, we can list the environment templates that are available. See
Example 5-1.
Example 5-1 List the default environment templates
[root@controller ICM43]# knife environment list
_default
example-ibm-os-allinone
example-ibm-os-ha-controller-n-compute
example-ibm-os-single-controller-n-compute
example-ibm-sce
[root@controller ICM43]#
New cloud environment creation
Create a directory on the deployment node for storing the environment and other topology
files. This directory is used by the Chef server for deployment purposes. In Example 5-2, we
copied the template for the single controller +n compute topology that we are going to deploy.
Example 5-2 Create your own environment
[root@controller itso_env]# knife environment show
example-ibm-os-single-controller-n-compute -d -Fjson > itso_cldenv.json
With the environment created (see Example 5-2), we can change the following attributes in
the new itso_cldenv.json file:
Environment name
openstack.endpoints.host
openstack.endpoints.bind-host
openstack.endpoints.mq.host
openstack.endpoints.db.host
ibm-sce.self-service.bind_interface
openstack.compute.virt_type
openstack.network.openvswitch.tenant_network_type = "gre"
openstack.network.openvswitch.bridge_mappings = ""
openstack.network.openvswitch.network_vlan_ranges = ""
openstack.network.openvswitch.bridge_mapping_interface = ""
openstack.network.ml2.tenant_network_types = "gre"
openstack.network.ml2.network_vlan_ranges = ""
openstack.network.ml2.flat_networks = ""
Tip: For the latest information about the attributes and parameters specific to KVM for IBM
z Systems, see Deploying an advanced configuration with KVM for IBM z Systems:
http://ibm.co/1MyQiMZ
Example 5-3 shows the results.
Example 5-3 Attributes that have been changed in the new environment json
{
"name": "itso_zkvm",
:
"endpoints": {
"host": "192.168.60.16",
"identity-admin": {
"port": "35357"
:
"bind-host": "192.168.60.16",
"mq": {
"host": "192.168.60.16",
"port": "5671"
:
"openstack": {
"endpoints": {
"network-openvswitch": {
"bind_interface": "ens192"
},
"compute-vnc-bind": {
"bind_interface": "ens192"
},
"compute-vnc-proxy-bind": {
"bind_interface": "ens192"
},
"compute-serial-console-bind": {
"bind_interface": "ens192"
}
:
"ml2": {
"type_drivers": "local,flat,vlan,gre,vxlan",
"tenant_network_types": "gre",
"mechanism_drivers": "openvswitch",
"flat_networks": "",
"network_vlan_ranges": "",
"tunnel_id_ranges": "1:1000",
"vni_ranges": "1001:2000"
},
"openvswitch": {
"tenant_network_type": "gre",
"network_vlan_ranges": "",
"enable_tunneling": "True",
"tunnel_type": "gre",
"tunnel_id_ranges": "1:1000",
"veth_mtu": 1500,
"tunnel_types": "gre,vxlan"
},
Register the environment with the Chef server
After the relevant changes have been made to the environment JSON file, we can register the
environment with the Chef server. See Example 5-4.
Example 5-4 Registering the environment with the Chef server
[root@controller itso_env]# knife environment from file itso_cldenv.json
Updated Environment itso_zkvm
[root@controller itso_env]# knife environment list
_default
example-ibm-os-allinone
example-ibm-os-ha-controller-n-compute
example-ibm-os-single-controller-n-compute
example-ibm-sce
itso_zkvm
[root@controller itso_env]#
5.2.5 Creating a controller topology
Now we can proceed with creating a controller topology. In doing so, we provide details about
the following items in a .json topology file:
The controller node host name and authentication details
Which environment the specific controller node conforms to
The role the controller node will act as
Other optional components to deploy, such as the IBM Self-Service Portal
Example 5-5 shows the topology file that we created.
Example 5-5 Creating a controller topology
[root@controller itso_env]# cat cntrltop.json
{
"name":"cntrltop",
"description":"topology definition for ITSO demo",
"environment":"itso_zkvm",
"secret_file":"/opt/ibm/cmwo/chef-repo/data_bags/example_data_bag_secret",
"run_sequentially":false,
"nodes": [
{
"fqdn":"controller.itso.ibm.com",
"password":"password",
"quit_on_error":true,
"run_order_number":1,
"runlist": [
"role[ibm-os-single-controller-node]",
"role[ibm-sce-node]"
]
}
]
}
Deploying the controller topology file
With the controller topology file created, we can deploy the topology using the Chef server,
as shown in Example 5-6. From the deployment server, the Chef server authenticates to the
controller node and starts the deployment of the various components of IBM Cloud Manager
with OpenStack.
Example 5-6 Deploying the controller node topology
[root@controller itso_env]# knife os manage deploy topology cntrltop.json
Deploying topology 'cntrltop' ...
The topology nodes are being deployed.
Deploying to nodes with run_order_number '1' in parallel.
Bootstrapping nodes...
Bootstrapping node ...
Doing old-style registration with the validation key at
/root/.chef/ibm-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to controller.itso.ibm.com
controller.itso.ibm.com Starting Chef Client on Node
controller.itso.ibm.com Bootstrapping Node
controller.itso.ibm.com Synchronizing Cookbooks
controller.itso.ibm.com Compiling Cookbooks
Deploying bootstrapped nodes...
Writing FIPS setting to environment 'itso_zkvm'
Setting run list for node controller.itso.ibm.com...
controller.itso.ibm.com:
run_list:
role[ibm-os-single-controller-node]
role[ibm-sce-node]
controller.itso.ibm.com Converging Node
controller.itso.ibm.com Synchronizing Cookbooks
controller.itso.ibm.com Compiling Cookbooks
controller.itso.ibm.com Running Recipe chef_handler::default
:
:
:
controller.itso.ibm.com Running Recipe openstack-bare-metal::api
controller.itso.ibm.com Running Recipe apache2::default
controller.itso.ibm.com Running Recipe ibm-sce::installfp
controller.itso.ibm.com Completed
All nodes with run_order_number '1' deployed.
Results for deploy of topology 'cntrltop'
Results for nodes with run_order_number '1'
Deploy of node at controller.itso.ibm.com was successful.
Deploy of topology 'cntrltop.json' completed in 9708 seconds.
[root@controller itso_env]#
Verifying the controller node
With the deployment of the controller node completed, we need to verify that all of the
OpenStack services and components are properly deployed and working, as shown in
Example 5-7.
Example 5-7 Verification of Nova services
[root@controller etc]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-conductor controller.itso.ibm.com internal enabled XXX 2015-09-24
14:33:50.450389
nova-scheduler controller.itso.ibm.com internal enabled XXX 2015-09-24
14:34:11.488662
nova-consoleauth controller.itso.ibm.com internal enabled XXX 2015-09-24
14:33:54.926035
nova-cert controller.itso.ibm.com internal enabled XXX 2015-09-24
14:34:05.490160
[root@controller etc]#
We have tailored various attributes in the environment JSON file. One of these is to use gre
and openvswitch for connectivity. Therefore, during deployment of the controller node, the
Chef server automatically converts the Ethernet flat network to a bridge. The Chef server also
enables, configures, and couples the Open vSwitch ports for connectivity. The result is shown
in Example 5-8.
Example 5-8 Open vSwitch network configuration
[root@controller ~]# ifconfig
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.60.15 netmask 255.255.255.0 broadcast 192.168.60.255
inet6 fe80::216:41ff:feed:3cbd prefixlen 64 scopeid 0x20<link>
ether 00:16:41:ed:3c:bd txqueuelen 0 (Ethernet)
RX packets 1574 bytes 162929 (159.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 874 bytes 307373 (300.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::216:41ff:feed:3cbd prefixlen 64 scopeid 0x20<link>
ether 00:16:41:ed:3c:bd txqueuelen 1000 (Ethernet)
RX packets 2077 bytes 201279 (196.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 898 bytes 310586 (303.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 16
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 106408 bytes 14737594 (14.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 106408 bytes 14737594 (14.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@controller ~]# ovs-vsctl show
142779c0-fa4f-484f-ab1f-920642e9cdba
Bridge br-tun
fail_mode: secure
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port "enp3s0"
Interface "enp3s0"
ovs_version: "2.3.0"
[root@controller ~]#
5.2.6 Creating a compute node topology
With the successful deployment of the controller node, let's deploy the KVM for IBM z
Systems compute node. Similar to what we did in 5.2.5, “Creating a controller topology”
on page 84, we need to create a topology for the compute node by providing the following
details:
Compute node host name and other authentication details
The environment that the specific compute node conforms to
The role of the compute node, for example, ibm-os-compute-node-kvmibm
We also need to create a node-specific network attribute file. This file is only required
because the attributes of our compute node network are different from those defined in our
itso_zkvm environment file, itso_cldenv.json. For example, our controller node network
interface is ens192, but the compute node has a different network interface
(enccw0.0.2d00). So, by using the attribute file shown in Example 5-9 on page 88, we can
specify node-specific attributes.
Attention: Use care when providing the required attributes in the topology files. Some
customization options might not be supported for all hypervisor types, and some cannot be
configured after you deploy your cloud environment.
Example 5-9 System and network attributes file for the KVM for IBM z Systems compute node
[root@controller itso_env]# cat zkvmtop.json
{
"name":"zkvmtop",
"description":"topology definition for zkvm",
"environment":"itso_zkvm",
"secret_file":"/opt/ibm/cmwo/chef-repo/data_bags/example_data_bag_secret",
"run_sequentially":false,
"nodes": [
{
"fqdn":"itsokvm1.itso.ibm.com",
"password":"zlinux",
"quit_on_error":true,
"run_order_number":1,
"runlist": [
"role[ibm-os-compute-node-kvmibm]"
],
"attribute_file":"zkvm-attr.json"
}
]
}
[root@controller itso_env]# cat zkvm-attr.json
{
"openstack": {
"endpoints": {
"network-openvswitch": {
"bind_interface": "enccw0.0.2d00"
},
"compute-vnc-bind": {
"bind_interface": "enccw0.0.2d00"
},
"compute-vnc-proxy-bind": {
"bind_interface": "enccw0.0.2d00"
},
"compute-serial-console-bind": {
"bind_interface": "enccw0.0.2d00"
}
}
}
}
Compute node deployment
After customizing the topology and attribute files, proceed with the deployment of the
compute node topology, as shown in Example 5-10.
Example 5-10 Deploying the compute node topology using knife
[root@controller itso_env]# knife os manage deploy topology zkvmtop.json
Deploying topology 'zkvmtop' ...
The topology nodes are being deployed.
Deploying to nodes with run_order_number '1' in parallel.
Bootstrapping nodes...
Bootstrapping node ...
Doing old-style registration with the validation key at
/root/.chef/ibm-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to itsokvm1.itso.ibm.com
itsokvm1.itso.ibm.com Starting Chef Client on Node
itsokvm1.itso.ibm.com Bootstrapping Node
itsokvm1.itso.ibm.com Synchronizing Cookbooks
itsokvm1.itso.ibm.com Compiling Cookbooks
Deploying bootstrapped nodes...
Setting run list for node itsokvm1.itso.ibm.com...
itsokvm1.itso.ibm.com:
run_list: role[ibm-os-compute-node-kvmibm]
itsokvm1.itso.ibm.com Converging Node
itsokvm1.itso.ibm.com Synchronizing Cookbooks
itsokvm1.itso.ibm.com Compiling Cookbooks
itsokvm1.itso.ibm.com Running Recipe chef_handler::default
itsokvm1.itso.ibm.com Running Recipe ibm-openstack-common::cmwo-version
:
:
:
itsokvm1.itso.ibm.com Running Recipe openstack-network::openvswitch
itsokvm1.itso.ibm.com Running Recipe openstack-telemetry::agent-compute
itsokvm1.itso.ibm.com Completed
All nodes with run_order_number '1' deployed.
Results for deploy of topology 'zkvmtop'
Results for nodes with run_order_number '1'
Deploy of node at itsokvm1.itso.ibm.com was successful.
Deploy of topology 'zkvmtop.json' completed in 139 seconds.
[root@controller itso_env]#
Important: As a prerequisite, the package repository on the compute node must be enabled
and recognized by YUM.
5.2.7 Cloud environment verification
In this section, we verify that OpenStack services were successfully deployed.
Compute service
From the controller node, execute the Nova service command to confirm that the compute
node is now deployed and managed by IBM Cloud Manager with OpenStack.
Example 5-11 Nova service list
[root@controller itso_env]# source ~/openrc
[root@controller itso_env]# nova service-list
+----+------------------+-------------------------+----------+---------+-------+---------------------------
| Id | Binary | Host | Zone | Status | State | Updated_at
+----+------------------+-------------------------+----------+---------+-------+---------------------------
| 1 | nova-conductor | controller.itso.ibm.com | internal | enabled | up | 2015-09-29T02:13:11.644559
| - |
| 4 | nova-scheduler | controller.itso.ibm.com | internal | enabled | up | 2015-09-29T02:13:13.412254
| - |
| 5 | nova-consoleauth | controller.itso.ibm.com | internal | enabled | up | 2015-09-29T02:13:20.666483
| - |
| 6 | nova-cert | controller.itso.ibm.com | internal | enabled | up | 2015-09-29T02:13:17.528850
| - |
| 21 | nova-compute | itsokvm1.itso.ibm.com | nova | enabled | up | 2015-09-29T02:13:13.291343
| -
|+----+------------------+-------------------------+----------+---------+-------+--------------------------
Network services
Every network service or extension in the cloud environment registers itself with the Neutron
server when the server or extension starts. For this reason, it is best to determine whether the
compute node network agents are registered with the controller (see Example 5-12). To an
extent, this also verifies that the environment is deployed correctly.
Example 5-12 Neutron agent list
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+-------------------------+-------+------------
| id | agent_type | host | alive |
admin_state_up | binary |
+--------------------------------------+--------------------+-------------------------+-------+------------
| b0cacb05-a5c7-4a50-9c18-e1646a8ba950 | DHCP agent | controller.itso.ibm.com | :-) | True
| neutron-dhcp-agent |
| 67e56dfa-9f0d-432e-b8c7-b17ef42516d1 | L3 agent | controller.itso.ibm.com | :-) | True
| neutron-l3-agent |
| e1be4b6c-9855-4a43-ab2a-a8d76db61cfa | Metadata agent | controller.itso.ibm.com | :-) | True
| neutron-metadata-agent |
| f84df0fc-e243-4375-9482-217efc73d1e4 | Open vSwitch agent | controller.itso.ibm.com | :-) | True
| neutron-openvswitch-agent |
| 4ceb8347-5b0f-46a3-98e8-10ef1c2428e4 | Loadbalancer agent | controller.itso.ibm.com | :-) | True
| neutron-lbaas-agent |
| 23e194b5-f825-4bca-9db2-407065a9b569 | Open vSwitch agent | itsokvm1.itso.ibm.com | :-) | True
| neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------------+-------+----------------+--------------------------
Important: The Nova compute node must have a status of enabled, as shown in
Example 5-11. Otherwise, the controller will not communicate with the compute node.
5.2.8 Accessing IBM Cloud Manager 4.3 with OpenStack
When the deployment is complete, the services of IBM Cloud Manager with OpenStack are
ready to use. The IBM Cloud Manager Dashboard is available at this location:
https://controller.<domainname>
Where <domainname> is the fully qualified domain name of the controller node in your
topology.
The IBM Self-Service user interface is accessible at this location:
https://controller.<domainname>:8080
Figure 5-2 shows the IBM Cloud Manager V4.3 dashboard.
Figure 5-2 IBM Cloud Manager with OpenStack Dashboard
IBM Cloud Manager virtual machine deployment
After deploying the components for the cloud environment, we need to set up the network for
the VMs to use. With the IBM Cloud Manager Dashboard, you can create several types of
networks for the VMs. The type of network you create depends on the type of network
connectivity and the hypervisor that you are using. There are three processes to carry out,
which are described in the following sections:
“Create a network” on page 91
“Upload the image to the cloud” on page 93
“Launch an instance for deployment” on page 94
Create a network
To create a network and specify the network provider settings:
1. Log in to the dashboard, and select Admin > System Panel > Networks.
2. Click Create Network, and the window shown in Figure 5-3 on page 92 opens. (You
cannot create a subnet by using this method. The subnet is created in the next step.)
Figure 5-3 Creating a gre network
3. After the network is created, create a subnet by clicking the newly created network and
providing the requested network information for the new cloud environment. See
Figure 5-4.
Figure 5-4 Adding a subnet and entering network information
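The same network and subnet can also be created from the controller command line with
the neutron client. The following is a minimal sketch; the network name, segmentation ID,
and address range are illustrative values, not taken from our environment:
[root@controller ~]# neutron net-create itso-net --provider:network_type gre --provider:segmentation_id 10
[root@controller ~]# neutron subnet-create itso-net 192.168.70.0/24 --name itso-subnet --gateway 192.168.70.1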
Using virsh, we installed SLES 12 Linux and created QCOW2 images. For more information
about creating QCOW2 images, see 3.3.1, “Preparing the environment” on page 58.
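As a reference point, an empty QCOW2 disk image for such an installation can be created
with qemu-img; the path and size here are illustrative:
[root@itsokvm1 ~]# qemu-img create -f qcow2 /var/lib/libvirt/images/sles12.qcow2 10G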
Cloud-init
Cloud-init is a multi-distribution package that handles the early initialization of a cloud
instance. The cloud-init software package is supported by IBM Cloud Manager with OpenStack
and can be used to pass boot-time customization data to virtual images (for example, server
metadata, user data, personality files, and SSH keys). The config drive can be accessed by
any guest operating system that is capable of mounting an ISO 9660 file system. Images that
are built with a recent version of the cloud-init software package can automatically access
and apply the supported customization values that are passed to the instance by the config drive.
Download the cloud-init .tar file from the Launchpad website:
https://launchpad.net/cloud-init/+download
You also need the setuptools package installed on the target system for cloud-init to work.
For more information about setuptools, see the following website:
https://pypi.python.org/pypi/setuptools
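For illustration, a minimal user-data file that cloud-init can apply at first boot might look like
the following sketch; the host name and SSH key are hypothetical placeholders:
[root@controller ~]# cat user-data
#cloud-config
hostname: linux90
ssh_authorized_keys:
  - ssh-rsa AAAA... user@workstation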
Upload the image to the cloud
With the network setup complete, you can upload the Linux image to the cloud. To do so:
1. Log in to the dashboard and select Admin > System Panel > Images.
2. Click Create Image, and the window shown in Figure 5-5 opens.
Figure 5-5 Importing an image file to the cloud
3. With the relevant information provided, click OK to import the image to IBM Cloud
Manager.
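Alternatively, the image can be uploaded from the controller command line with the glance
client. This is a sketch; the image name and file path are illustrative:
[root@controller ~]# source ~/openrc
[root@controller ~]# glance image-create --name sles12-kvm --disk-format qcow2 --container-format bare --is-public True --file /root/sles12.qcow2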
Launch an instance for deployment
After the image is imported, you can launch an instance for deployment:
1. Log in to the dashboard, and select Project > Compute Panel > Instances.
2. Click Launch Instance, and the window shown in Figure 5-6 opens.
Figure 5-6 Launch instance
3. After the deployment, click Project > Compute > Instances and notice that the instance is
listed. The instance is also listed in the IBM Self-Service Portal, as shown in
Figure 5-7.
Figure 5-7 IBM Self-Service portal
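The equivalent deployment can also be started from the command line with the nova client.
The following is a sketch; the flavor, image, and instance names are illustrative, and
<network-id> is the ID of the network created earlier:
[root@controller ~]# nova boot --flavor m1.small --image sles12-kvm --nic net-id=<network-id> linux90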
Appendix A. Installing KVM for IBM z Systems
with ECKD devices
This appendix describes some of the differences between KVM for IBM z Systems installation
on Small Computer System Interface (SCSI) devices as shown in 3.2, “Setting up KVM for
IBM z Systems” on page 31 and installation on ECKD devices.
Parameter file
It is possible to specify ECKD devices in the .prm file the same way that we did for SCSI
devices in 3.2.1, “Preparing the .ins and .prm files” on page 32.
Example A-1 shows a parameter file that specifies ECKD devices for the installer.
Example A-1 Parameter file
ro ramdisk_size=40000 rd.dasd=0.0.6500,0.0.6501
rd.znet=qeth,0.0.2d00,0.0.2d01,0.0.2d02,layer2=1,portno=0,portname=DUMMY
ip=192.168.60.71::192.168.60.1:255.255.255.0:itsokvm2:enccw0.0.2d00:none
inst.repo=ftp://ftp:ftp@192.168.60.15/KVM/DVD1
The rd.dasd statement defines two ECKD devices. All other statements are the same as for
SCSI installation.
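After the installation, the DASD devices can be confirmed from the running system with the
lsdasd command from the s390-tools package, a quick sketch:
[root@itsokvm2 ~]# lsdasd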
Figure A-1 shows the device selection panel that is displayed during KVM for IBM z Systems
installation. It is not possible to add ECKD devices in this panel; they must be defined in the
parameter file.
Figure A-1 Devices for installation
Appendix B. Installing IBM Cloud Manager
with OpenStack
This appendix describes the steps required to install IBM Cloud Manager with OpenStack.
Prerequisites
Before installing IBM Cloud Manager with OpenStack, be sure that the prerequisites are met.
Yum repository
The first prerequisite that must be met before deployment is to create yum repositories for
the controller node operating system and its optional packages. If you are not connected to
a network for downloading the repositories from the Red Hat website (http://redhat.com),
you can create your own local repositories by using the RHEL 7.1 repository and the
optional RHEL 7.1 packages (see Example B-1).
Example B-1 Local yum repository
[root@controller ~]# yum repolist
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use
subscription-manager to register.
repo id repo name Status
local RHEL 7.1 linux yum repository 4,371
optional RHEL 7.1 linux Optional yum repository 3,194
repolist: 8,565
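A local repository of this kind is defined with a plain yum .repo file. The following is a minimal
sketch; the repository path is illustrative:
[root@controller ~]# cat /etc/yum.repos.d/local.repo
[local]
name=RHEL 7.1 linux yum repository
baseurl=file:///repo/rhel71/
enabled=1
gpgcheck=0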
Host name
The host where you install the controller server must have a fully qualified domain name that
includes the domain suffix. For example, the fully qualified domain name would be
mydeploymentserver.ibm.com rather than mydeploymentserver. To verify that the controller
and compute node have fully qualified domain names, use the command shown in
Example B-2.
Example B-2 Verification of fully qualified domain name
[root@controller ~]# hostname
controller.itso.ibm.com
[root@controller ~]#
The host name of the controller system must be added to the DNS system. To verify that the
host name is resolvable, issue the command shown in Example B-3.
Example B-3 Verification of resolvable host name
[root@controller ~]# hostname -f
controller.itso.ibm.com
[root@controller ~]#
Security-Enhanced Linux (SELinux)
For ease of deployment, we dynamically switched SELinux to permissive mode on the
controller node, as shown in Example B-4.
Example B-4 Disabling SELinux
[root@controller ICM43]# getenforce
Enforcing
[root@controller ICM43]# setenforce Permissive
[root@controller ICM43]# getenforce
Permissive
[root@controller ICM43]#
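Note that setenforce does not persist across reboots. To keep SELinux in permissive mode
permanently, the mode can also be changed in /etc/selinux/config, as in this sketch:
[root@controller ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config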
Network Time Protocol
An important prerequisite is time synchronization. Before you can deploy the cloud, ensure
that all of the nodes are synchronized with a Network Time Protocol (NTP) server. If an NTP
server is not available or cannot be reached, synchronize the time across the controller and
compute nodes manually. Some deviation is acceptable.
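On RHEL 7.1, time synchronization is typically handled by chronyd; its state can be checked
as follows (a sketch using standard chrony tooling):
[root@controller ~]# systemctl enable chronyd && systemctl start chronyd
[root@controller ~]# chronyc sources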
Installing IBM Cloud Manager 4.3
To install IBM Cloud Manager, you either download the installation packages, as we did for
the examples in this book, or order a DVD that is specific to the platform on which the
controller will be installed. After the installer packages are downloaded, grant execute
permission to all of the installation packages in the directory, as shown in Example B-5.
Example B-5 IBM Cloud Manager with OpenStack 4.3 installable packages
[root@controller ICM43]# ls -lh
total 12G
-rwxrwxrwx. 1 root root 5.5G Jun 30 06:04 cmwo430_xlinux_install.bin
-rwxrwxrwx. 1 root root 409 Jun 30 04:39 cmwo_4.3.lic
-rwxrwxrwx. 1 root root 2.8G Jul 2 23:39 cmwo_fixpack_4.3.0.1.tar.gz
-rwxrwxrwx. 1 root root 3.5G Sep 14 04:06 cmwo_fixpack_4.3.0.3.tar.gz
-rwxrwxrwx. 1 root root 3.0K Jun 30 04:38 cmwo-install-sample.rsp
-rwxrwxrwx. 1 root root 59M Jun 30 04:39 IBM Cloud Manager with OpenStack Hyper-V
Agent.msi
-rwxrwxrwx. 1 root root 145K Jun 30 04:39 readme.pdf
-rwxrwxrwx. 1 root root 8.0K Jun 30 04:39 readme.txt
[root@controller ICM43]#
To install, run the Cloud Manager with OpenStack 4.3 binary installable package, as shown in
Example B-6. The process takes you through interactive steps during the installation.
Example B-6 IBM Cloud Manager with OpenStack 4.3 Installer
[root@controller ICM43]# ./cmwo430_xlinux_install.bin
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
===============================================================================
Choose Locale...
----------------
1- Deutsch
->2- English
...
CHOOSE LOCALE BY NUMBER: 2
Important: IBM Cloud Manager with OpenStack 4.3 packages are available only for
x86_64 and ppc64 platforms. At the time of writing, z Systems drivers are supported only
as compute node services.
Later in the process, after the terms and conditions are accepted, the installer displays a
preinstallation summary, as shown in Example B-7.
Example B-7 Installer Pre-Installation Summary
===============================================================================
Pre-Installation Summary
------------------------
Please Review the Following Before Continuing:
Product Name:
IBM Cloud Manager with OpenStack
Install Folder:
/opt/ibm/cmwo
PRESS <ENTER> TO CONTINUE:
As the interactive process continues, the installer begins installing packages on the
controller server. The installation takes a while to complete. Example B-8 shows the
notification that the installation is complete.
Example B-8 Successful installation message and next step from the installer
===============================================================================
Installation Complete
---------------------
The deployment server for IBM Cloud Manager with OpenStack has been
successfully installed to:
/opt/ibm/cmwo
The next step is to select a topology and deploy the components that are
necessary to create your cloud environment.
To deploy from a web browser, use the following URL to Launch IBM Cloud Manager
- Deployer:
https://controller.itso.ibm.com:8443
To deploy from the command line, go to IBM Knowledge Center, select the product
release, and see the deployment section.
Verifying IBM Cloud Manager installation
The installation was successful, as shown in Example B-8 on page 100. However, a
suggested practice is to view the logs for any errors or warnings. The installation logs are
available in the /opt/ibm/cmwo/_installation/Logs directory and can typically be identified
uniquely by the date and time of the installation.
Example B-9 shows a sample log file.
Example B-9 Installation log location
[root@controller Logs]# pwd
/opt/ibm/cmwo/_installation/Logs
[root@controller Logs]# ls -la
total 672
drwxrwxr-x. 2 root root 4096 Sep 22 16:47 .
drwxr-x---. 3 root root 4096 Sep 22 16:47 ..
-rwxr-xr-x. 1 root root 677109 Sep 22 16:47
IBM_Cloud_Manager_with_OpenStack_Install_09_22_2015_16_28_25.log
[root@controller Logs]#
Next, verify that the Chef server is installed properly and is running without any issues.
Example B-10 shows a sample.
Example B-10 Chef server status
[root@controller ~]# chef-server-ctl status
run: bookshelf: (pid 4824) 767s; run: log: (pid 30281) 1135s
run: nginx: (pid 4860) 767s; run: log: (pid 30451) 1131s
run: oc_bifrost: (pid 4866) 766s; run: log: (pid 29992) 1142s
run: oc_id: (pid 4896) 766s; run: log: (pid 30029) 1141s
run: opscode-erchef: (pid 4928) 765s; run: log: (pid 30323) 1134s
run: opscode-expander: (pid 4934) 764s; run: log: (pid 30216) 1137s
run: opscode-expander-reindexer: (pid 4949) 764s; run: log: (pid 30224) 1136s
run: opscode-solr4: (pid 4959) 763s; run: log: (pid 30105) 1138s
run: postgresql: (pid 4966) 763s; run: log: (pid 29947) 1143s
run: rabbitmq: (pid 4973) 763s; run: log: (pid 29909) 1149s
run: redis_lb: (pid 5067) 762s; run: log: (pid 30428) 1132s
The components that are necessary for creating a cloud environment are installed and ready
for use.
Applying IBM Cloud Manager with OpenStack 4.3 fix packs
While writing this book, we downloaded the latest fix pack (Fix Pack 3) to update Chef
cookbooks and other resources that are stored on our controller server. Any necessary fix
packs can be downloaded from IBM Fix Central:
http://www.ibm.com/support/fixcentral/
After the download, the fix packs need to be stored locally in your controller system.
The extraction of a fix pack is shown in Example B-11.
Example B-11 Extracting the fix pack
[root@controller ICM43]# tar -zxvf cmwo_fixpack_4.3.0.3.tar.gz
After the fix pack is extracted from its compressed format, recheck the IBM Cloud Manager
with OpenStack 4.3 installation logs to ensure that there are no errors reported during
installation.
Upon confirmation, apply the patch as shown in Example B-12.
Example B-12 Applying the fix pack
[root@controller ICM43]# ./install_cmwo_fixpack.sh
09/23/2015 10:49:00 AM Starting installation of fix pack for IBM Cloud Manager
with OpenStack 4.3.
09/23/2015 10:49:00 AM Installed version is 4.3.0.0-20150514-1836 .
09/23/2015 10:49:00 AM Fix pack is 4.3.0.3 F20150909-2056.
09/23/2015 10:49:01 AM Copying product files...
09/23/2015 10:51:28 AM Copy successful.
09/23/2015 10:51:28 AM Running post-install fix pack scripts...
09/23/2015 10:56:35 AM Post-install scripts completed successfully.
09/23/2015 10:56:35 AM IBM Cloud Manager with OpenStack fix pack installed
successfully.
Fix pack install logs archived as
/opt/ibm/cmwo/version/install_cmwo_fixpack_2015-09-23_10_56_35_logs.zip.
[root@controller ICM43]#
After the fix pack is applied to the IBM Cloud Manager with OpenStack installation, we
suggest verifying that the fix pack logs do not include errors or warning messages.
Appendix C. Basic setup and use of zHPM
This appendix describes our first steps with the IBM z Systems Hypervisor Performance
Manager (zHPM), which is used to bring a goal-oriented approach to the performance
management of a hypervisor.
Example C-1 shows the commands for enabling and starting zHPM.
Example C-1 Enabling and starting zhpmd
[root@itsokvm1 ~]# systemctl enable zhpmd
ln -s '/usr/lib/systemd/system/zhpmd.service'
'/etc/systemd/system/multi-user.target.wants/zhpmd.service'
[root@itsokvm1 ~]# systemctl start zhpmd
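To confirm that the daemon came up cleanly, standard systemd tooling can be used, for
example:
[root@itsokvm1 ~]# systemctl status zhpmd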
In our environment, we used the root user ID, which already has all authorities needed to
manage zHPM. Example C-2 shows how to add a non-root user ID to the appropriate groups
so that the user ID is authorized to use zHPM.
Example C-2 Allowing a non-root user to manage zHPM
[root@itsokvm1 ~]# usermod -a -G zhpmuser,zhpmadm non-root
Example C-3 shows that, by default, CPU management was not enabled. It also shows how it
was enabled.
Example C-3 Enabling zHPM CPU management
[root@itsokvm1 ~]# zhpm config --insecure
zHPM CPU Management is off
[root@itsokvm1 ~]# zhpm config --cpu-mgmt on --insecure
zHPM CPU Management is on
Terminology: The term virtual server is used throughout this appendix and is equivalent
to a virtual machine.
Then, we created a new workload resource group named darling. Example C-4 shows how it
was created, lists all of the defined workload resource groups, and displays the default
definition of the new group, together with its default policy and service class.
Example C-4 Creating and displaying a workload resource group
[root@itsokvm1 ~]# zhpm wrg-create --wrg-name darling --insecure
Created new workload resource group: b7f5fead-20d1-4edf-9386-d6f8b8332b54
[root@itsokvm1 ~]# zhpm wrg-display --insecure
Wrg-Id Wrg-Name BI #VS
------------------------------------ -------------------------------- ------ ---
b28ccaf1-ee6d-4bd2-86a4-4eb5e51f3db6 zHPMDefaultWorkloadResourceGroup medium 7
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling medium 0
[root@itsokvm1 ~]# zhpm wrg-display --wrg-name darling --insecure --json
{
  "workload-resource-groups": [
    {
      "wrg-info": {
        "resource-uri": "/zhpm/wsapi/v1/workload-resource-groups/b7f5fead-20d1-4edf-9386-d6f8b8332b54",
        "resource-id": "b7f5fead-20d1-4edf-9386-d6f8b8332b54",
        "name": "darling",
        "description": ""
      },
      "performance-policy": {
        "perf-policy-info": {
          "name": "zHPMDefaultPerformancePolicy",
          "description": "zHPM Generated Default Performance Policy",
          "last-modified-date": 1444991826275,
          "last-modified-by": "root",
          "business-importance": "medium"
        },
        "service-classes": [
          {
            "name": "zHPMDefaultServiceClass",
            "description": "zHPM generated default service class",
            "business-importance": "medium",
            "velocity-goal": "moderate",
            "cpu-critical": false,
            "virtual-server-name-filters": [
              ".*"
            ]
          }
        ]
      },
      "virtual-servers": []
    }
  ]
}
We added a virtual machine, linux80, to the darling workload resource group. Example C-5
shows how we added the virtual server and displayed information about all workload resource
groups. The darling group now contains one virtual server. Because all virtual servers have
the same goals, there are no dynamic resource adjustments reported by running the
ra-display command.
Example C-5 Adding a virtual machine and displaying information
[root@itsokvm1 ~]# zhpm --insecure vs-wrg-add --vs-name linux80 --wrg-name darling
Successfully associated workload resource group to virtual server
[root@itsokvm1 ~]# zhpm wrg-display --insecure
Wrg-Id Wrg-Name BI #VS
------------------------------------ -------------------------------- ------ ---
b28ccaf1-ee6d-4bd2-86a4-4eb5e51f3db6 zHPMDefaultWorkloadResourceGroup medium 6
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling medium 1
[root@itsokvm1 ~]# zhpm ra-display --insecure
No dynamic resource adjustments have occurred over duration (60min)
zHPM CPU Management is on
We created new policy and service class definitions and updated the darling workload
resource group with this information, as shown in Example C-6. Virtual servers managed by
this policy have higher velocity goals and higher importance than the default virtual servers
managed by the default policy.
Example C-6 Updating policy and displaying information
[root@itsokvm1 ~]# cat darling.pol
{
  "performance-policy": {
    "perf-policy-info": {
      "name": "Darling",
      "description": "Policy for darling workload",
      "business-importance": "high"
    },
    "service-classes": [
      {
        "name": "ServiceClass1",
        "description": "service class",
        "business-importance": "high",
        "velocity-goal": "fast",
        "cpu-critical": false,
        "virtual-server-name-filters": [".*"]
      }
    ]
  }
}
[root@itsokvm1 ~]# zhpm --insecure wrg-update --wrg-name darling --perf-policy darling.pol
Successfully set performance policy for workload resource group:
b7f5fead-20d1-4edf-9386-d6f8b8332b54
[root@itsokvm1 ~]# zhpm wrg-display --wrg-name darling --insecure
Wrg-Id Wrg-Name BI #VS
------------------------------------ -------- ------- ---
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling high 1
[root@itsokvm1 ~]# zhpm wrg-display --insecure
Wrg-Id Wrg-Name BI #VS
------------------------------------ -------------------------------- ------- ---
b28ccaf1-ee6d-4bd2-86a4-4eb5e51f3db6 zHPMDefaultWorkloadResourceGroup medium 6
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling high 1
Because the linux80 virtual machine now has more demanding goals than competing virtual
servers do, we see dynamic resource adjustments in the ra-display output, as shown in
Example C-7. Virtual servers with less important goals are CPU donors for linux80.
Example C-7 Displaying dynamic resource adjustments
[root@itsokvm1 ~]# zhpm ra-display --insecure
Adj-Time Type CPU-SB CPU-SA Vs-Name Wrg-Name
------------------- -------- ------ ------ ----------------- --------------------------------
2015-10-16 07:12:56 receiver 1024 1084 linux80 darling
donor 1024 1012 linux84 zHPMDefaultWorkloadResourceGroup
donor 1024 1012 instance-00000003 zHPMDefaultWorkloadResourceGroup
donor 1024 1012 linux83 zHPMDefaultWorkloadResourceGroup
donor 1024 1012 linux85 zHPMDefaultWorkloadResourceGroup
donor 1024 1012 linux82 zHPMDefaultWorkloadResourceGroup
2015-10-16 07:13:56 receiver 1084 1154 linux80 darling
donor 1012 998 linux84 zHPMDefaultWorkloadResourceGroup
donor 1012 998 instance-00000003 zHPMDefaultWorkloadResourceGroup
donor 1012 998 linux83 zHPMDefaultWorkloadResourceGroup
donor 1012 998 linux85 zHPMDefaultWorkloadResourceGroup
donor 1012 998 linux82 zHPMDefaultWorkloadResourceGroup
Adj-Time Reason R-Vs-Name R-Wrg-Name
-------- ------ --------- ----------
No failed dynamic resource adjustments have occurred over duration (60min)
After a while, there were no more adjustments, because the performance index (PI) of the
service class that is associated with the receiver virtual machine achieved its goal. We
decided to redefine the darling workload resource group with even stricter goals, as shown
in Example C-8.
Example C-8 Updating policy and displaying information
[root@itsokvm1 ~]# cat darling.pol
{
  "performance-policy": {
    "perf-policy-info": {
      "name": "Darling",
      "description": "Policy for darling workload",
      "business-importance": "highest"
    },
    "service-classes": [
      {
        "name": "ServiceClass1",
        "description": "service class",
        "business-importance": "highest",
        "velocity-goal": "fastest",
        "cpu-critical": true,
        "virtual-server-name-filters": [".*"]
      }
    ]
  }
}
[root@itsokvm1 ~]# zhpm --insecure wrg-update --wrg-name darling --perf-policy darling.pol
Successfully set performance policy for workload resource group:
b7f5fead-20d1-4edf-9386-d6f8b8332b54
[root@itsokvm1 ~]# zhpm wrg-display --wrg-name darling --insecure
Wrg-Id Wrg-Name BI #VS
------------------------------------ -------- ------- ---
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling highest 1
[root@itsokvm1 ~]# zhpm wrg-display --insecure
Wrg-Id Wrg-Name BI #VS
------------------------------------ -------------------------------- ------- ---
b28ccaf1-ee6d-4bd2-86a4-4eb5e51f3db6 zHPMDefaultWorkloadResourceGroup medium 6
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling highest 1
Example C-9 shows that linux80 was able to receive another set of resources from other
virtual servers to satisfy its more demanding goal.
Example C-9 Displaying dynamic resource adjustments after the policy update
[root@itsokvm1 ~]# zhpm ra-display --insecure
Adj-Time Type CPU-SB CPU-SA Vs-Name Wrg-Name
------------------- -------- ------ ------ ----------------- --------------------------------
2015-10-16 07:12:56 receiver 1024 1084 linux80 darling
donor 1024 1012 linux84 zHPMDefaultWorkloadResourceGroup
donor 1024 1012 instance-00000003 zHPMDefaultWorkloadResourceGroup
donor 1024 1012 linux83 zHPMDefaultWorkloadResourceGroup
donor 1024 1012 linux85 zHPMDefaultWorkloadResourceGroup
donor 1024 1012 linux82 zHPMDefaultWorkloadResourceGroup
2015-10-16 07:13:56 receiver 1084 1154 linux80 darling
donor 1012 998 linux84 zHPMDefaultWorkloadResourceGroup
donor 1012 998 instance-00000003 zHPMDefaultWorkloadResourceGroup
donor 1012 998 linux83 zHPMDefaultWorkloadResourceGroup
donor 1012 998 linux85 zHPMDefaultWorkloadResourceGroup
donor 1012 998 linux82 zHPMDefaultWorkloadResourceGroup
2015-10-16 07:19:41 receiver 1154 1224 linux80 darling
donor 998 984 linux84 zHPMDefaultWorkloadResourceGroup
donor 998 984 instance-00000003 zHPMDefaultWorkloadResourceGroup
donor 998 984 linux83 zHPMDefaultWorkloadResourceGroup
donor 998 984 linux85 zHPMDefaultWorkloadResourceGroup
donor 998 984 linux82 zHPMDefaultWorkloadResourceGroup
Adj-Time Reason R-Vs-Name R-Wrg-Name
-------- ------ --------- ----------
No failed dynamic resource adjustments have occurred over duration (60min)
ibm.com/redbooks
Printed in U.S.A.
Back cover
ISBN 0738441201
SG24-8332-00

Getting Started with KVM for IBM z Systems

  • 1.
    Redbooks Front cover Getting Startedwith KVM for IBM z Systems Bill White Tae Min Baek Mark Ecker Marian Gasparovic Manoj S Pattabhiraman
  • 3.
    International Technical SupportOrganization Getting Started with KVM for IBM z Systems November 2015 SG24-8332-00
  • 4.
    © Copyright InternationalBusiness Machines Corporation 2015. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. First Edition (November 2015) This edition applies to Version 1, Release 1, Modification 0 of KVM for IBM z Systems (product number 5648-KVM). Note: Before using this information and the product it supports, read the information in “Notices” on page v.
  • 5.
    © Copyright IBMCorp. 2015. All rights reserved. iii Contents Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi IBM Redbooks promotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix Authors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix Now you can become a published author, too . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x Chapter 1. KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.1 Why KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.1.1 Advantages of using KVM for IBM z Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2 IBM z Systems and KVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2.1 Storage connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.2 Network connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.3 Hardware Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.4 Open source virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.5 What comes with KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.3 Managing the KVM for IBM z Systems environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.3.1 IBM z Systems Hypervisor Performance Manager (zHPM) . . . . . . . . . . . . . . . . . . 9 1.4 Using IBM Cloud Manager with OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Chapter 2. Planning the environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.1 Planning KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.1.1 Hardware requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.1.2 Software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.1.3 Installation methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.2 Planning virtualized resources for KVM virtual machines . . . . . . . . . . . . . . . . . . . . . . . 14 2.2.1 Compute consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.2.2 Storage consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.2.3 Network consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . 18 2.2.4 Software consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 2.2.5 Live migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 2.3 Planning KVM virtual machine management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 2.4 Planning a cloud infrastructure with KVM and IBM Cloud Manager with OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 2.4.1 Planning for KVM for IBM z Systems installation . . . . . . . . . . . . . . . . . . . . . . . . . 20 2.4.2 Planning for virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 2.4.3 Planning for IBM Cloud Manager with OpenStack installation . . . . . . . . . . . . . . . 22 2.4.4 Planning for IBM Cloud Manager with OpenStack deployment . . . . . . . . . . . . . . 24 Chapter 3. Installing and configuring the environment. . . . . . . . . . . . . . . . . . . . . . . . . 27 3.1 Our configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 3.1.1 Logical view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 3.1.2 Physical resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 3.1.3 Preparation tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 3.2 Setting up KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
  • 6.
    iv Getting Startedwith KVM for IBM z Systems 3.2.1 Preparing the .ins and .prm files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 3.2.2 Installing KVM for IBM z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 3.2.3 Configuring KVM for IBM z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.3 Deploying virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 3.3.1 Preparing the environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 3.3.2 Installing Linux on z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 3.3.3 Modifying domain definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 3.3.4 Linux on z Systems configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Chapter 4. Managing and monitoring the environment. . . . . . . . . . . . . . . . . . . . . . . . . 65 4.1 KVM on IBM z System management interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 4.1.1 Introduction to the libvirt management stack. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 4.2 Using virsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 4.2.1 Basic commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 4.2.2 Add I/O resources dynamically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 4.2.3 VM live migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 4.3 Monitoring KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 4.3.1 Configuring the Nagios monitoring tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 Chapter 5. Building a cloud environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 5.1 Overview of IBM Cloud Manager with OpenStack V4.3 . . . . . . . . . . . . . . . . . . . . . . . . 78 5.1.1 IBM Cloud Manager with OpenStack version 4.3 . . . . . . . . . . . . . . . . . . . . . . . . . 78 5.1.2 Environmental setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 5.2 Installing, deploying, and configuring KVM on a cloud based on IBM z Systems. . . . . 81 5.2.1 Installing and update IBM Cloud Manager with OpenStack V4.3 . . . . . . . . . . . . . 81 5.2.2 Deploying the IBM Cloud Manager topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 5.2.3 Creating a cloud environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 5.2.4 Environment templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 5.2.5 Creating a controller topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 5.2.6 Creating a compute node topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 5.2.7 Cloud environment verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 5.2.8 Accessing IBM Cloud Manager 4.3 with OpenStack. . . . . . . . . . . . . . . . . . . . . . . 91 Appendix A. 
Installing KVM for IBM z Systems with ECKD devices . . . . . . . . . . . . . . 95 Parameter file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 Appendix B. Installing IBM Cloud Manager with OpenStack . . . . . . . . . . . . . . . . . . . . 97 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 Yum repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 Host name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 Security-Enhanced Linux (SELinux) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 Network Time Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 Installing IBM Cloud Manager 4.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 Applying IBM Cloud Manager with OpenStack 4.3 fix packs . . . . . . . . . . . . . . . . . . . . 101 Appendix C. Basic setup and use of zHPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
  • 7.
    © Copyright IBMCorp. 2015. All rights reserved. v Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. 
To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
  • 8.
    vi Getting Startedwith KVM for IBM z Systems Trademarks IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: DB2® DS8000® ECKD™ FICON® FlashSystem™ Global Business Services® IBM® IBM FlashSystem® IBM z™ IBM z Systems™ IBM z13™ PR/SM™ Processor Resource/Systems Manager™ Redbooks® Redbooks (logo) ® Storwize® System z® XIV® z Systems™ z/OS® z/VM® z13™ The following terms are trademarks of other companies: Linux is a trademark of Linus Torvalds in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, or service names may be trademarks or service marks of others.
  • 9.
    IBM REDBOOKS PROMOTIONS Findand read thousands of IBM Redbooks publications Search, bookmark, save and organize favorites Get up-to-the-minute Redbooks news and announcements Link to the latest Redbooks blogs and videos Download Now Get the latest version of the Redbooks Mobile App iOS Android Place a Sponsorship Promotion in an IBM Redbooks publication, featuring your business or solution with a link to your web site. Qualified IBM Business Partners may place a full page promotion in the most popular Redbooks publications. Imagine the power of being seen by users who download millions of Redbooks publications each year! ® ® Promote your business in an IBM Redbooks publication ibm.com/Redbooks About Redbooks Business Partner Programs IBM Redbooks promotions
  • 10.
  • 11.
    © Copyright IBMCorp. 2015. All rights reserved. ix Preface This IBM® Redbooks® publication gives a broad explanation of the kernel-based virtual machine (KVM) for IBM z™ Systems and how it uses the architecture of IBM z Systems™. It focuses on the planning and design of the environment and provides installation and configuration definitions that are necessary to build and manage KVM for IBM z Systems. It also helps you plan, install, and configure IBM Cloud Manager with OpenStack for use with KVM for IBM z Systems in a cloud environment. This book is useful to IT architects and system administrators who plan for and install KVM for IBM z Systems. The reader is expected to have a good understanding of IBM z Systems hardware, KVM, Linux on z Systems, and cloud concepts. Authors This book was produced by a team of specialists from around the world working at the IBM International Technical Support Organization, Poughkeepsie Center. Bill White is a Project Leader and Senior z Systems Networking and Connectivity Specialist at IBM Redbooks, Poughkeepsie Center. Tae Min Baek is a Certified IT Architect for IBM Systems mardware in Korea. He has 16 years of experience in z Systems virtualization, IBM z/OS®, IBM z/VM®, and Linux operating systems. Currently, he works in Technical Sales for Linux on z Systems and as a benchmark center leader in Korea. He also provides technical support for Linux on z Systems cloud solutions, porting local ISV solutions, the PoC/benchmark test, and the implementation project. Mark Ecker is a certified z Systems Client Technical Specialist in the United States. He has worked for IBM for 17 years in the z Systems field. His areas of expertise include capacity planning, solution design, and deep knowledge of the z Systems platform. Mark is also a co-author of IBM Enterprise Workload Manager V2.1, SG24-6785 Marian Gasparovic is an IT Specialist working for the IBM Systems Group in IBM Slovakia. After working as a z/OS administrator with an IBM Business Partner, he joined IBM as a storage specialist. Later, he worked as a Field Technical Sales Specialist and was responsible for new workloads. He joined Systems Lab Services and Training in 2010. His main area of expertise is virtualization on z Systems. He is a co-author of several IBM Redbooks publications. Manoj S Pattabhiraman is an IBM Certified Senior IT Specialist from the IBM Benchmarking Center, Singapore. He has more than 14 years of experience in IBM System z® virtualization, cloud, and Linux on System z. In his current role, he leads the System z benchmarking team in Singapore and also provides consultation and implementation services for various Linux on System z customers across ASEAN region. Manoj has contributed to several z/VM and Linux on System z related IBM Redbooks publications, and has been a frequent presenter at various technical conferences and workshops on z/VM and Linux on System z. Thanks to the following people for their contributions to this project: Ella Buslovich and Karen Lawrence IBM Redbooks
  • 12.
    x Getting Startedwith KVM for IBM z Systems Dave Bennin, Don Brennan, Rich Conway, and Bob Haimowitz IBM Global Business Services®, Development Support Team Zhuo Hua Li and Hong Jin Wei IBM China Klaus Smolin, Tony Gargya, and Viktor Mihajlovski IBM Germany Now you can become a published author, too Here’s an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time. Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base. Find out more about the residency program, browse the residency index, and apply onlinet: ibm.com/redbooks/residencies.html Comments welcome Your comments are important to us. We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways: Use the online Contact us review Redbooks form: ibm.com/redbooks Send your comments by email: redbooks@us.ibm.com Mail your comments: IBM Corporation, International Technical Support Organization Dept. HYTD Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400 Stay connected to IBM Redbooks Find us on Facebook: http://www.facebook.com/IBMRedbooks Follow us on Twitter: http://twitter.com/ibmredbooks Look for us on LinkedIn:
  • 13.
    Preface xi http://www.linkedin.com/groups?home=&gid=2130806 Explore newRedbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html
  • 14.
    xii Getting Startedwith KVM for IBM z Systems
  • 15.
    © Copyright IBMCorp. 2015. All rights reserved. 1 Chapter 1. KVM for IBM z Systems This chapter is an introduction to open virtualization with KVM for IBM z Systems and a description of how the environment can be managed. It covers the following topics: Why KVM for IBM z Systems IBM z Systems and KVM Managing the KVM for IBM z Systems environment Using IBM Cloud Manager with OpenStack 1 Terminology: The terms virtual server and virtual machine are interchangeable. Both terms are use throughout this book, depending on the component being discussed.
  • 16.
    2 Getting Startedwith KVM for IBM z Systems 1.1 Why KVM for IBM z Systems Today’s systems must be able to scale up and scale out, not only in terms of performance and size, but also in functions. Virtualization is a core enabler of system capability, but open source and standards are key to making virtualization effective. KVM for IBM z Systems is an open source virtualization option for running Linux-centric workloads, using common Linux-based tools and interfaces, while taking advantage of the robust scalability, reliability, and security that is inherent to the IBM z Systems platform. The strengths of the z Systems platform have been developed and refined over several decades to provide additional value to any type of IT-based services. KVM for IBM z Systems can manage and administer multiple virtual machines, allowing for large numbers of Linux-based workloads to run simultaneously on the z Systems platform. z Systems platforms also have a long history of providing security for applications and sensitive data in virtual environments. It is the most securable platform in the industry, with security integrated throughout the stack in hardware, firmware, and software. 1.1.1 Advantages of using KVM for IBM z Systems KVM for IBM z Systems offers enterprises a cost-effective alternative to other hypervisors. It has simple and familiar standard user interfaces, offering easy integration of the z Systems platform into any IT infrastructure. KVM for IBM z Systems can be managed to allow for over-commitment of system resources to optimize the virtualized environment. This is described in 2.2.1, “Compute consideration” on page 14. In addition, KVM for IBM z Systems can help make platform mobility easier. Its live relocation capabilities enable you to move virtual machines and workloads between multiple instances of KVM for IBM z Systems without incurring downtime. Table 1-1 lists some of the key features and benefits of KVM for IBM z Systems. Note: Both KVM for IBM z Systems and Linux on z Systems are the same KVM and Linux that run on other hardware platforms, with the same look and feel.
  • 17.
    Chapter 1. KVMfor IBM z Systems 3 Table 1-1 KVM for IBM z Systems key features 1.2 IBM z Systems and KVM The z Systems platform is highly virtualized, with the goal of maximizing the use of compute and I/O (storage and network) resources, and simultaneously lowering the total amount of resources needed for your workloads. For decades, virtualization has been embedded in z Systems architecture and built into the hardware and firmware. Virtualization requires a hypervisor, which manages resources that are required for multiple independent virtual machines. Hypervisors can be implemented in software or hardware, and z Systems has both. The hardware hypervisor is known as IBM Processor Resource/Systems Manager™ (PR/SM™). PR/SM is implemented in firmware as part of the base system. It fully virtualizes the system resources and does not require additional software to run. KVM for IBM z is a software hypervisor that uses PR/SM functions to service its virtual machines. PR/SM enables defining and managing subsets of the z Systems resources in logical partitions (LPARs). Each KVM for IBM z instance runs in a dedicated LPAR. The LPAR definition includes several logical processing units (LPUs), memory, and I/O resources. LPUs are defined and managed by PR/SM and are perceived by KVM for IBM z as real CPUs. PR/SM is responsible for accepting requests for work on LPUs and dispatching that work on physical CPUs. LPUs can be dynamically added to and removed from an LPAR. LPARs can be added, modified, activated, or deactivated in z Systems platforms using the Hardware Management Console (HMC). Feature Benefits KVM hypervisor Supports running multiple disparate Linux virtual machines on a single system CPU sharing Allows for the sharing of CPU resources by virtual machines I/O sharing Enables the sharing of I/O resources among virtual machines Memory and CPU over-commitment Supports the over-commitment of CPU, memory, and swapping of inactive memory Live virtual machine relocation Enables workload migration with minimal impact Dynamic addition and deletion of virtual I/O devices Reduces downtime to modify I/O device configurations for virtual machines Thin-provisioned virtual machines Allows for copy-on-write virtual disks to save on storage Hypervisor performance management Supports policy based, goal-oriented management and monitoring of virtual CPU resources Installation and configuration tools Supplies tools to install and configure KVM for IBM z Systems Transactional execution use Provides improved performance for running multi-threaded applications
  • 18.
    4 Getting Startedwith KVM for IBM z Systems KVM for IBM z Systems also uses PR/SM to access storage devices and the network for Linux on z Systems virtual machines (see Figure 1-1). Figure 1-1 KVM running in z Systems LPARs 1.2.1 Storage connectivity Storage connectivity is provided on the z Systems platforms by host bus adapters (HBAs) called Fibre Connection (IBM FICON®) features. IBM FICON (FICON Express16S and FICON Express8S) features follow Fibre Channel (FC) standards. They support data storage and access requirements and the latest FC technology in storage devices. The FICON features support the following protocols: Native FICON An enhanced protocol (over FC) that provides for communication with FICON devices, such as disks, tapes, and printers. Native FICON supports IBM Extended Count Key Data (ECKD™) devices. Fibre Channel Protocol (FCP) A standard protocol for communicating with disk and tape devices. FCP supports small computer system interface (SCSI) devices. Linux on z Systems and KVM for IBM z Systems can use both protocols by using the FICON features.
  • 19.
    Chapter 1. KVMfor IBM z Systems 5 1.2.2 Network connectivity Network connectivity is provided on the z Systems platform by the network interface cards (NICs) called Open Systems Adapter (OSA) features. The OSA features (OSA-Express5S, OSA-Express4S, and OSA-Express3) provide direct, industry-standard local area network (LAN) connectivity and communication in a networking infrastructure. OSA features use the z Systems I/O architecture, called queued direct input/output (QDIO). QDIO is a highly efficient data transfer mechanism that uses system memory queues and a signaling protocol to directly exchange data between the OSA microprocessor in the feature and the network stack running in the operating system. KVM for IBM z Systems can use the OSA features by virtualizing them for Linux on z Systems to use. For more information about storage and network connectivity for Linux on z Systems, see TThe Virtualization Cookbook for IBM z Systems Volume 3: SUSE Linux Enterprise Server 12, SG24-8890: http://www.redbooks.ibm.com/abstracts/sg248890.html 1.2.3 Hardware Management Console The Hardware Management Console (HMC) is a stand-alone computer that runs a set of management applications. The HMC is a closed system, which means that no other applications can be installed on it. The HMC can set up, manage, monitor, and operate one or more z Systems platforms. It manages and provides support utilities for the hardware and its LPARs. The HMC is used to install KVM for IBM z Systems and to provide an interface to the IBM z Systems hardware for configuration management functions. For details about the HMC, see Introduction to the Hardware Management Console in the IBM Knowledge Center: http://ibm.co/1PD5gFi 1.2.4 Open source virtualization Kernel-based virtual machine (KVM) technology is a cross-platform virtualization technology that turns the Linux kernel into an enterprise-class hypervisor by using the hardware virtualization support built into the z Systems platform. This means that KVM for IBM z Systems can do things such as scheduling tasks, dispatching CPUs, managing memory, and interacting with I/O resources (storage and network) through PR/SM. KVM for IBM z Systems creates virtual machines as Linux processes that run Linux on z Systems images using a modified version of another open source module, known as a quick emulator (QEMU). QEMU provides I/O device emulation and device virtualization inside the virtual machine. The KVM for IBM z Systems kernel provides the core virtualized infrastructure. It can schedule virtual machines on real CPUs and manage their access to real memory. QEMU runs in a user space and implements virtual machines using KVM module functions.
  • 20.
    6 Getting Startedwith KVM for IBM z Systems QEMU virtualizes real storage and network resources for a virtual machine, which, in turn, uses virtio drivers to access these virtualized resources, as shown in Figure 1-2. Figure 1-2 Open source virtualization: KVM for IBM z Systems The network interface in Linux on z Systems is a virtual Ethernet interface. The interface name is eth. Multiple Ethernet interfaces can be defined to Linux and are handled by the virtio_net device driver module. In Linux, a generic virtual block device is used rather than specific devices, such as ECKD or SCSI devices. The virtual block devices are handled by the virtio_blk device driver module. For information about KVM, see KVM — an open cross-platform virtualization alternative, a smarter choice: http://www.ibm.com/systems/virtualization/kvm/ Browse KVM for IBM z Systems product publications in the IBM Knowledge Center: http://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_kvm.html
1.2.5 What comes with KVM for IBM z Systems

KVM for IBM z Systems provides standard Linux and KVM interfaces for operational control of the environment, such as standard drivers and application programming interfaces (APIs), as well as system emulation support and virtualization management. Included as part of KVM for IBM z Systems are the following components:

• The command-line interface (CLI) is a common, familiar Linux interface environment used to issue commands and interact with the KVM hypervisor. The user issues successive lines of commands to change or control the environment.
• Libvirt is open source software that resides on KVM and many other hypervisors to provide low-level virtualization capabilities that interface with KVM through a CLI called virsh. A list of key virsh commands is included in “Using virsh” on page 67. (A brief sketch of typical virsh usage follows this list.)
• The IBM z Systems Hypervisor Performance Manager (zHPM) monitors virtual machines running on KVM to achieve goal-oriented, policy-based performance targets (see Appendix C, “Basic setup and use of zHPM” on page 103).
• Open vSwitch (OVS) is open source software that allows for network communication between virtual machines and the external networks that are hosted by the KVM hypervisor. See this website for more information:
  http://www.openvswitch.org
• MacVTap is a device driver used to virtualize bridge networking and is based on the macvlan device driver. See this website for more information:
  http://virt.kernelnewbies.org/MacVTap
• QEMU is open source software that provides hardware emulation for virtual machines running on KVM. It also provides management and monitoring functions for the KVM virtual machines. For more information, see the QEMU wiki:
  http://wiki.qemu.org
• The installer offers a series of panels to assist and guide the user through the installation process. Each panel has settings that can be selected to customize the KVM installation. See Chapter 3, “Installing and configuring the environment” on page 27 for examples of the installer panels.
• Nagios remote plug-in executor (NRPE) can be used with KVM for IBM z. NRPE is an add-on that allows you to execute plug-ins on KVM for IBM z. With those plug-ins, you can monitor resources, such as disk usage, CPU load, and memory usage. For more information, see “Configuring the Nagios monitoring tool” on page 64.
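For orientation, the following is a sketch of typical virsh usage (the guest name vm01 and the host prompt are ours, and output is abbreviated; these are standard virsh subcommands):

# Connect to the local hypervisor and list all defined virtual machines
[root@kvmhost ~]# virsh list --all
 Id    Name    State
 ---------------------
 1     vm01    running

# Basic lifecycle operations against a guest
[root@kvmhost ~]# virsh start vm01      # boot a defined guest
[root@kvmhost ~]# virsh suspend vm01    # pause its virtual CPUs
[root@kvmhost ~]# virsh resume vm01     # continue execution
[root@kvmhost ~]# virsh shutdown vm01   # request a clean guest shutdown
[root@kvmhost ~]# virsh dumpxml vm01    # display the guest's domain XML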
1.3 Managing the KVM for IBM z Systems environment

KVM for IBM z Systems integrates with standard OpenStack virtualization management, which enables enterprises to easily integrate Linux servers into their infrastructure and cloud offerings. KVM for IBM z Systems supports libvirt APIs to enable CLIs (and custom scripting) to be used to administer the hypervisor. KVM can be administered using open source tools, such as virt-manager or OpenStack.

KVM for IBM z Systems can also be administered and managed by using IBM Cloud Manager with OpenStack (see Figure 1-3 on page 8), which is created and maintained by IBM and built on OpenStack.

Figure 1-3 KVM for IBM z Systems management interfaces

KVM for IBM z Systems can be managed just like any other KVM hypervisor by using the Linux CLI, which provides a familiar experience for platform management. In addition, an open source tool called Nagios can be used to monitor the KVM for IBM z Systems environment.

Libvirt provides different methods of access through a layered approach, from a command line called virsh in the libvirt tools layer to a low-level API for many programming languages (see Figure 1-4).

Figure 1-4 KVM management via libvirt API layers (application layer, libvirt tools layer, libvirt API layer, libvirtd, hypervisor layer, hardware)
The main component of the libvirt software is the libvirtd daemon, which is the component that interacts directly with QEMU and the KVM kernel at the hypervisor layer. QEMU manages and monitors the KVM virtual machines by performing the following tasks:

• Manage the I/O between virtual machines and KVM
• Create virtual disks
• Change the state of a virtual machine:
  – Start a virtual machine
  – Stop a virtual machine
  – Suspend a virtual machine
  – Resume a virtual machine
  – Delete a virtual machine
  – Take and restore snapshots

See the libvirt website for more information about libvirt:
http://libvirt.org

1.3.1 IBM z Systems Hypervisor Performance Manager (zHPM)

zHPM monitors and manages workload performance of the virtual machines under KVM by performing the following operations:

• Detect when a virtual machine is not achieving its goals when it is a member of a Workload Resource Group.
• Determine whether the virtual machine performance can be improved with additional resources.
• Project the impact on all virtual machines of the reallocation of resources.
• Redistribute processor resources if there is a good trade-off based on policy.

For more information, see Introduction to zHPM in the IBM Knowledge Center:
http://ibm.co/1japece

zHPM setup instructions and examples are in Appendix C, “Basic setup and use of zHPM” on page 103.

1.4 Using IBM Cloud Manager with OpenStack

OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a data center. It is based on the OpenStack project:
http://www.openstack.org/

IBM Cloud Manager with OpenStack is an advanced management solution that is created and maintained by IBM and built on OpenStack. It can be used to get started with a cloud environment and continue to scale with users and workloads, providing advanced resource management with simplified cloud administration and full access to OpenStack APIs.
KVM for IBM z Systems compute nodes support the following OpenStack services:

• Nova libvirt driver
• Neutron agent for Open vSwitch
• Ceilometer support
• Cinder

The OpenStack compute node has an abstraction layer for compute drivers to support different hypervisors, including QEMU and KVM for IBM z Systems through the libvirt API layer (see Figure 1-4 on page 8).
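As an illustration of how a compute node is commonly pointed at the libvirt driver in that abstraction layer, the following is a hypothetical nova.conf fragment (a sketch only, not this book's configuration; the option names are standard OpenStack settings, but file location and values can vary by distribution and release):

# /etc/nova/nova.conf (hypothetical fragment)
[DEFAULT]
# Select the libvirt compute driver in the Nova abstraction layer
compute_driver = libvirt.LibvirtDriver

[libvirt]
# Use full KVM hardware virtualization rather than plain QEMU emulation
virt_type = kvm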
Chapter 2. Planning the environment

This chapter describes the planning activities to carry out before installing kernel-based virtual machine (KVM) for IBM z Systems and before setting up virtual environments managed by KVM. It also covers the available management tools and provides an overview of a scenario that is implemented in this book as an example, along with the required checklists for the scenario. The information in this chapter will assist you with all of these tasks.

This chapter includes the following sections:

• Planning KVM for IBM z Systems
• Planning virtualized resources for KVM virtual machines
• Planning KVM virtual machine management
• Planning a cloud infrastructure with KVM and IBM Cloud Manager with OpenStack
2.1 Planning KVM for IBM z Systems

The supported hardware and software need to be configured as described in this chapter before installation of KVM for IBM z Systems. An installation method also needs to be determined, as described in this section.

2.1.1 Hardware requirements

The supported servers, storage hardware, and network features described in the subsections that follow need to be confirmed before the installation begins.

Servers

The following servers are supported, only with regard to the Integrated Facilities for Linux (IFLs) that are activated:

• IBM z13™
• IBM zEC12
• IBM zBC12

Storage

KVM for IBM z Systems supports small computer system interface (SCSI) devices and extended count key data (IBM ECKD) devices. You can use either SCSI or ECKD devices, or both. The following storage devices are supported:

• SCSI devices:
  – IBM XIV®
  – IBM Storwize® V7000
  – IBM FlashSystem™
  – SAN Volume Controller
  – IBM DS8000® (FCP attached)
• ECKD devices:
  – DS8000 (IBM FICON attached)

The Fibre Channel Protocol (FCP) channel supports multiple switches and directors and can be placed between the IBM z Systems server and the SCSI device. This can provide more choices for storage solutions or the ability to use existing storage devices.

ECKD devices can help to manage disks efficiently because KVM and Linux do not have to manage the I/O paths or load balancing; these are already managed by the IBM z Systems hardware. You can choose SCSI devices, ECKD devices, or both for the KVM environment.

Host bus adapters

The following FICON features support connectivity to both SCSI and ECKD devices:

• FICON Express16S
• FICON Express8S

Network interface cards

The following Open Systems Adapter (OSA) features are supported:

• IBM OSA-Express5S
• IBM OSA-Express4S
• IBM OSA-Express3 (zEC12 and zBC12 only)

With the OSA-Express3 feature, KVM for IBM z Systems does not support VLANs or flat networks together with Open vSwitch¹.

¹ Open vSwitch is a multilayer virtual switch. For details, see this website: http://openvswitch.org/
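Once the hardware definitions are in place, the channel devices visible to a Linux LPAR can be checked with the standard s390-tools commands; a hedged sketch (the host prompt is ours, and output is omitted):

# List the channel subsystem devices (OSA, FCP, and FICON CHPIDs) visible to the LPAR
[root@kvmhost ~]# lscss

# List FCP host adapters and, with -D, the attached SCSI LUNs
[root@kvmhost ~]# lszfcp -H
[root@kvmhost ~]# lszfcp -D

# List ECKD DASD devices
[root@kvmhost ~]# lsdasd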
Logical partitions (LPARs) for KVM

When you define and allocate resources to LPARs on which KVM is installed, consider CPU and memory needs:

• CPU: A minimum of one CPU (known as an Integrated Facility for Linux, or IFL) must be assigned to the KVM LPAR. The suggestion is to assign no more than 36 IFLs per KVM LPAR.
• Memory: A maximum of 8 TB of RAM can be allocated per KVM LPAR. The suggestion is to allocate no more than 1 TB of RAM per KVM LPAR.

For the IBM z Systems platform, your system must be at the proper firmware or microcode level. At the time of writing, these were the appropriate levels:

• For z13: N98805.010 D22H Bundle 20a
• For zEC12 and zBC12: H49525.013 D15F Bundle 45a

For more information, search the Preventive Service Planning buckets web page:
http://www.software.ibm.com/webapp/set2/psp/srchBroker

Search for the following PSP hardware upgrade identifiers:

• For the IBM z13, the PSP bucket is 2964DEVICE.
• For the IBM zEC12, the PSP bucket is 2827DEVICE.
• For the IBM zBC12, the PSP bucket is 2828DEVICE.

2.1.2 Software requirements

The following software resources are required:

• KVM for IBM z Systems V1.1.0 (product number 5648-KVM)

  KVM for IBM z Systems can be ordered and delivered electronically using IBM Shopz:
  http://www.ibm.com/software/ShopzSeries

  After you download the ISO file from IBM Shopz, you can use it to install from an FTP server, or burn a DVD and use that for the installation.

• The latest available fix pack for KVM for IBM z Systems

  KVM for IBM z Systems 1.1.0.1 contains the current, cumulative fix packs. Download these from IBM Fix Central:
  http://www.ibm.com/support/fixcentral/
2.1.3 Installation methods

You can install KVM for IBM z Systems using either of the following methods:

• From an FTP server, where the FTP server is in the same subnet as the Hardware Management Console (HMC).
• From a DVD (or a CD with a capacity of 800 MB or greater) that you create, containing the installation images. An FTP server is also required, but this method does not require the FTP server to be in the same subnet as the HMC. You will need to copy and create the .ins and .prm files that correspond with your environment and burn them with the ISO image to the physical DVD or CD. More details about performing the installation from a DVD are available in KVM for IBM z Systems: Planning and Installation Guide, SC27-8236-00, in the IBM Knowledge Center:
  http://ibm.co/1Qxm1BW

Note: You must prepare your own FTP server and upload the ISO file for KVM for IBM z Systems to it before installation. The installation method you select depends on the subnet of the FTP server.

The FTP server must be accessible from the target installation LPAR. We chose the FTP server method of installation because it has more flexibility for creating and updating the generic .prm file that is needed during installation. Before the installation, we prepared the FTP server in our scenario to be in the same subnet as the HMC. Details of the installation method from an FTP server are provided in Chapter 3, “Installing and configuring the environment” on page 27.

2.2 Planning virtualized resources for KVM virtual machines

After installing KVM for IBM z Systems, you can plan and design the virtualized environments to build (including CPU, memory, storage, and network) and run the virtual machines on KVM. When adding virtual machines, you must create .xml files to define your virtual resources. The following sections describe considerations for virtual resources when you define virtual machines.

2.2.1 Compute consideration

Virtual CPUs and memory are configured and made available to a virtual machine using the vcpu and memory elements in the .xml file of your virtual machine. KVM supports CPU and memory over-commitment.

To maximize performance, it is suggested that you define the minimum number of virtual CPUs and the minimum amount of memory necessary for each virtual machine. If you allocate more virtual CPUs to the virtual machines than are needed, the system works, but this configuration can cause performance degradation as the number of virtual machines increases. Consider these suggestions:

• CPU:
  – The suggested over-commit ratio of CPUs is 10:1 (virtual-to-real). The real CPUs in this case are the IFLs assigned to the KVM LPAR.
  – Do not define more virtual CPUs to a virtual machine than the number of IFLs assigned to the KVM LPAR. The maximum number of virtual CPUs per virtual machine is 64.
• Memory:
  – The suggested over-commit ratio of memory is 2:1 (virtual-to-real).

You can configure the CPU weight of a virtual machine, and you can modify it during operation. The CPU shares of a virtual machine are calculated by forming the weight-fraction of the virtual machine. CPU weight is helpful for managing your virtual machines by priority or server workload. Additional details and examples of CPU shares are available under “CPU management” in KVM Virtual Server Management, SC34-2752-00:
http://ibm.co/1PQkXHW

2.2.2 Storage consideration

KVM supports virtualization of several storage devices on a KVM LPAR. You can typically use block devices or disk image files to provide local storage devices to a virtual machine.

Block device

A virtual machine that uses block devices for local mass storage typically performs better than a virtual machine that uses disk image files. The virtual machine that uses block devices achieves lower latency and higher throughput because its I/O passes through fewer software layers. Figure 2-1 shows the block devices that QEMU can use for KVM virtual machines.

Figure 2-1 Block devices for KVM virtual machines (SCSI LUNs and ECKD devices presented to guests as virtio devices: entire disks, partitions, and LVM logical volumes)
The following block devices are supported by QEMU:

• Entire devices

  A physical disk, such as a SCSI or ECKD device, can be defined as a virtual disk of a virtual machine. The virtual machine then uses all of the physical disk space that it manages. Example 2-1 shows a sample .xml file that defines a virtual disk backed by an entire physical device.

  Example 2-1   Sample .xml for entire devices of VM01

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw'/>
    <source dev='/dev/sda'/>
    <target dev='vda' bus='virtio'/>
  </disk>

• Disk partitions

  KVM for IBM z Systems can partition a physical disk. Each partition can be allocated to the same or different virtual machines. This can help to use large physical disks more efficiently. Example 2-2 shows a sample .xml file that defines a virtual disk backed by a partition.

  Example 2-2   Sample .xml for disk partitions of VM01

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw'/>
    <source dev='/dev/sdb1'/>
    <target dev='vdb' bus='virtio'/>
  </disk>

• Logical volume manager (LVM) logical volumes

  KVM can create and manage logical volumes using LVM. This makes it easier to manage the available storage in general, and it also makes it easier to back up your virtual machines without shutting them down, thanks to LVM snapshots. Example 2-3 shows a sample .xml file that defines a virtual disk backed by a logical volume.

  Example 2-3   Sample .xml for logical volumes of VM02

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw'/>
    <source dev='/dev/VolGroup00/LogVol00'/>
    <target dev='vda' bus='virtio'/>
  </disk>

The following requirements must be considered when choosing to use block devices:

• All block devices must be available and accessible to the hypervisor. The virtual machine cannot access devices that are not available from the hypervisor.
• You must activate or enable some block devices before you can use them. For example, LVM volumes must be active, as shown in the sketch that follows.
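A minimal sketch of activating LVM volumes on the host before assigning them to a guest (the volume group name matches Example 2-3; the host prompt is ours):

# Scan for volume groups known to the host
[root@kvmhost ~]# vgscan

# Activate all logical volumes in volume group VolGroup00
[root@kvmhost ~]# vgchange -ay VolGroup00

# Verify that the logical volumes are active and visible under /dev
[root@kvmhost ~]# lvs VolGroup00
[root@kvmhost ~]# ls /dev/VolGroup00/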
File

A disk image file is a file that represents a local hard disk to the virtual machine. This representation is a virtual hard disk. The size of the disk image file determines the maximum size of the virtual hard disk: a disk image file of 100 GB can produce a virtual hard disk of 100 GB.

The disk image file is in a location outside of the virtual machine. Other than the size of the disk image file, the virtual machine cannot access any other information about it. The disk image file resides in a file system on any of the block devices shown in Figure 2-1 on page 15 that are mounted on KVM. However, disk image files can also be located across a network connection in a remote file system, for example.

The following file types are supported by QEMU:

• Raw

  A raw disk image file preallocates all of the storage space that the virtual machine uses when the file is created. The file resides in the KVM file system, and it requires less overhead than QEMU Copy On Write (QCOW2). Example 2-4 shows a sample .xml file that defines a raw image file.

  Example 2-4   Sample .xml to use a raw type of disk image file

  <disk type='file' device='disk'>
    <driver name='qemu' type='raw'/>
    <source file='/var/lib/libvirt/images/sl12sp0.img'/>
    <backingStore/>
    <target dev='vda' bus='virtio'/>
  </disk>

• QCOW2

  QCOW2 uses a disk storage optimization strategy that delays the allocation of storage until it is actually needed. A QCOW2 disk image file grows as data is written, so it starts smaller than a raw disk image file of the same virtual size. QCOW2 can therefore use the file system space of the KVM host more efficiently. Example 2-5 shows a sample .xml file that defines a QCOW2 image file.

  Example 2-5   Sample .xml to use QCOW2 disk image file

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/sl12sp0.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>

(A sketch of creating such image files with the qemu-img utility follows the comparison below.)

A virtual machine that uses block devices for local mass storage typically performs better than a virtual machine that uses disk image files, for the following reasons:

• Managing the file system where the disk image file is located creates an additional resource demand for I/O operations.
• Improper partitioning of mass storage using disk image files can cause unnecessary I/O operations.
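Disk image files of either type are typically created with the qemu-img utility; a hedged sketch (the file names and size are ours):

# Create a raw image of 10 GB; preallocation=full allocates all blocks up front
# (if your QEMU level lacks this option, the raw file is created sparse)
[root@kvmhost ~]# qemu-img create -f raw -o preallocation=full /var/lib/libvirt/images/guest.img 10G

# Create a QCOW2 image with the same virtual size; it grows as data is written
[root@kvmhost ~]# qemu-img create -f qcow2 /var/lib/libvirt/images/guest.qcow2 10G

# Inspect an image's format, virtual size, and actual disk usage
[root@kvmhost ~]# qemu-img info /var/lib/libvirt/images/guest.qcow2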
However, disk image files provide the following benefits:

• Containment: Many disk image files can be in a single storage unit. For example, disk image files can be located on disks, partitions, logical volumes, and other storage units.
• Usability: Managing multiple files is easier than managing multiple disks, multiple partitions, multiple logical volumes, multiple arrays, and other storage units.
• Mobility: You can easily move files from one location or system to another location or system.
• Cloning: You can easily copy and modify files for new virtual machines to use.
• Sparse files save space: Using a file system that supports sparse files conserves unaccessed disk space.
• Remote and network accessibility: Files can be in file systems on remote systems that are connected by a network.

Important: Whether you use SCSI devices or ECKD devices, disk multipathing in the virtual machines is not required. For SCSI devices, disk multipathing is handled by KVM for IBM z Systems. For ECKD devices, the I/O paths are handled by PR/SM in the z Systems hardware.

2.2.3 Network consideration

KVM can provide network devices as virtual Ethernet devices by configuring direct MacVTap² connections or Open vSwitch connections. To set up a virtual network on KVM, for the purposes of this book, we considered the following factors:

• For redundancy of network devices, we considered bonding two IBM Open Systems Adapters (OSAs). Both MacVTap and Open vSwitch can be configured with a bonding device.
• In a cloud environment, it is typical to separate the management network from the data network. For isolation between multiple networks, we prepared and set up separate OSA devices, each connected to a different network.
• As of this writing, Open vSwitch is supported by IBM Cloud Manager with OpenStack, but MacVTap is not yet supported.

We chose to use Open vSwitch in our configuration because it is supported by IBM Cloud Manager with OpenStack. Open vSwitch also provides more flexibility and ease of management through its command-line interface (CLI) and a database that stores network information, and it reduces complexity compared to MacVTap, which is managed by the CLI and an .xml file. (A brief Open vSwitch sketch follows.)

² MacVTap is a new device driver meant to simplify virtualized bridged networking. For more information, see http://virt.kernelnewbies.org/MacVTap
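For orientation, a minimal Open vSwitch sketch using the standard ovs-vsctl commands (the bridge, bond, and interface names are illustrative only; this book's actual bridge setup is shown in “Defining Open vSwitches” on page 48):

# Create a bridge for the data network
[root@kvmhost ~]# ovs-vsctl add-br vsw-data

# Attach a bonded pair of OSA interfaces to the bridge for redundancy
# (the second interface name here is hypothetical)
[root@kvmhost ~]# ovs-vsctl add-bond vsw-data bond-data enccw0.0.2d00 enccw0.0.2e00

# Display the Open vSwitch database contents
[root@kvmhost ~]# ovs-vsctl show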
2.2.4 Software consideration

To operate Linux on z Systems as a virtual machine of KVM for IBM z Systems, a Linux on z Systems distribution must be obtained from a Linux distribution partner. SUSE Linux Enterprise Server (SLES) 12 SP1 is supported as the virtual machine operating system on the KVM for IBM z Systems hypervisor.

2.2.5 Live migration

To perform a live migration, the source and destination hosts must be connected and have access to the same or equivalent system resources, and to the same storage devices and networks. There are no restrictions on the location of the destination host; it can be another LPAR on the same server or an LPAR on another z Systems server.

Carefully consider system resources, storage, network, and performance when you prepare to migrate a virtual machine to another host. Details are available in the KVM Virtual Server Management section of the IBM Knowledge Center:
http://ibm.co/1PD9s89

2.3 Planning KVM virtual machine management

Libvirt³ is a management tool that installs with KVM. You can create, delete, run, stop, and manage your virtual machines using the virsh command, which is provided as part of libvirt. Virsh operations rely on the ability of the library to connect to a running libvirtd daemon. Therefore, the daemon must be running before you use virsh.

When you plan to manage a virtual environment on KVM as one of the resources in a cloud, IBM Cloud Manager with OpenStack can support it. To manage your virtual environment with IBM Cloud Manager with OpenStack, you will need to review the hardware, operating system, and software prerequisites of IBM Cloud Manager with OpenStack. IBM Cloud Manager with OpenStack supports KVM for IBM z Systems as compute nodes. You also need to consider the KVM for IBM z Systems prerequisites in a virtualization environment:

• IBM Cloud Manager with OpenStack prerequisites
  http://ibm.co/1OiaXWb
• KVM for IBM z Systems prerequisites
  http://ibm.co/1PD9zRg

2.4 Planning a cloud infrastructure with KVM and IBM Cloud Manager with OpenStack

In this book, we illustrate a simple scenario for building a cloud infrastructure with KVM and IBM Cloud Manager with OpenStack to evaluate the virtualization and management functions. These functions include the ability to create, delete, run, and stop virtual machines, to create virtual networks and virtual storage, to perform live migration, and to clone a virtual machine. This section provides information to review before building your cloud environment.

³ Libvirt is a management tool that installs with KVM. Visit http://wiki.libvirt.org/page/Virtio
In this section, we describe planning considerations and information for the following areas:

• KVM installation
• Virtual machines
• IBM Cloud Manager with OpenStack installation
• IBM Cloud Manager with OpenStack deployment

If you plan to build and manage a virtual environment using only KVM, skip the following sections:

• 2.4.3, “Planning for IBM Cloud Manager with OpenStack installation” on page 22
• 2.4.4, “Planning for IBM Cloud Manager with OpenStack deployment” on page 24

2.4.1 Planning for KVM for IBM z Systems installation

This section describes the considerations for installing KVM for IBM z Systems and then outlines the information required for the installation process.

Planning considerations

Consider the following areas before installing KVM for IBM z Systems:

• Number of CPUs in the LPAR
  This depends on the number of virtual CPUs needed and the level of planned over-commitment.
• Amount of memory in the LPAR
  This depends on the memory needed for the virtual machines and the level of planned memory over-commitment.
• DVD or FTP installation
  As described in 2.1.3, “Installation methods” on page 14, it is possible to start the installation from the HMC using a DVD drive or from an FTP server. This depends on your environment.
• Type of storage
  Choose either SCSI or ECKD devices for KVM for IBM z Systems to use.
• Storage space for virtual machines
  Consider how to provide storage to virtual machines. For example, do you plan to use whole disks attached to virtual machines or a QCOW2 file? Do you plan to expand LVM?
• Number of OSA ports and networking
  KVM for IBM z Systems needs only one OSA port. However, to provide redundancy, it is suggested that you use a bonding interface and more than one OSA port.
• Networking for virtual machines
  Consider how your virtual machines will be connected to the LAN. For example, will you be using MacVTap or Open vSwitch? Will you use VLANs? If you will be using Open vSwitch, how many Open vSwitches are needed?
Information required for installation

The following is a list of information that you will need during installation:

• FTP information: IP address of the FTP server, FTP directory with the required files, and FTP credentials
• OSA device address: The OSA triplet that will be used to create the KVM for IBM z Systems network interface card (NIC)
• Networking information: For KVM for IBM z Systems, the IP address, network mask, default gateway, and host name
• VLAN (if needed): Parent interface of the VLAN, VLAN ID
• DNS (if needed): IP addresses of the DNS servers, search domain
• Network Time Protocol (NTP) (if needed): Addresses of the NTP servers to be used by KVM for IBM z
• Installation disks: If you are installing on SCSI devices, the following information is required to establish a path to the related storage:
  – FCP device address
  – The target WWPN (disk storage subsystem WWPN)
  – LUN ID
  If you are installing on ECKD devices, the DASD device address is required.
• Root password: The password for the root user

2.4.2 Planning for virtual machines

This section describes the considerations for virtual machines and then outlines the information required for the installation process.

Planning considerations

Consider the following areas before installing a virtual machine:

• Number of virtual CPUs
• Amount of memory
  Virtual machines need to have enough memory to avoid paging. However, too much memory for one virtual machine leaves less shared memory for the other virtual machines.
• Installation source
• Storage space for virtual machines
  Consider how to provide storage to virtual machines. For example, do you plan to use whole disks attached to virtual machines, or a QCOW2 file? Do you plan to expand LVM?
• I/O drivers
  Use virtio drivers. There are no device-specific drivers for SCSI devices, ECKD devices, or NICs in virtual machines.
• Multipath
  No disk multipathing is needed in a virtual machine; all of that is handled by KVM. See the shaded box marked “Important” on page 18 for further information.
• Networking
  Plan how many virtual network adapters will be needed for a virtual machine and whether they will handle VLAN tags.

Information required for installation

The following list depends on the operating system that will be installed. This type of information is required during installation:

• FTP information (assuming FTP installation): IP address of the FTP server, FTP directory with the required files, FTP user identification and password
• Networking information: Virtual machine IP address, network mask and default gateway, host name
• VLAN: Parent interface of the VLAN, VLAN ID
• DNS (if needed): IP addresses of the DNS servers, search domain
• NTP (if needed): IP addresses of the NTP servers to be used by the virtual machine
• File system layout

2.4.3 Planning for IBM Cloud Manager with OpenStack installation

This section describes areas to consider when planning to install IBM Cloud Manager with OpenStack and then outlines the information that is required for the installation process. If you plan to build and manage a virtual environment using only KVM, skip this section.

Planning considerations

Consider the following before installing IBM Cloud Manager with OpenStack:

• Hardware
  The deployment server and controller for IBM Cloud Manager with OpenStack 4.3 do not support installation on a z Systems platform. An x86 server, with its CPU, memory, disk, and NIC, is needed for the cloud environment. For detailed information about the hardware prerequisites, see IBM Cloud Manager with OpenStack hardware prerequisites in the IBM Knowledge Center:
  http://ibm.co/1SJUM54
  Also, consider whether you will install and run the deployment server, controller, and database server on the same node or on separate nodes.
• Operating systems
  At the time of writing, Red Hat Enterprise Linux Version 7.1 (64-bit) is supported for the deployment and controller servers on an x86 server.
• Database server
  Determine the database server product that will be used for the IBM Cloud Manager with OpenStack databases. As of this writing, the supported databases are IBM DB2®, MariaDB, and MySQL.
• Yum repository
  Use Red Hat Subscription Management or a local yum repository.
• Installation method
  Install from DVDs, or download and install packages using the CLI, the GUI, or a silent installation.

Information required for installation

The following information is required during installation:

• Networking information: IP address, network mask and default gateway, host name with a fully qualified domain name that includes the domain suffix
• DNS server: IP address of the DNS server that has the host name for the deployment server
• Yum repository: IP address or host name of the repository server and directory
• Root password or a user ID with root authority: Root authority is required to run the installer
• NTP server: IP addresses of the NTP servers to be used by the deployment server and all nodes
• Systemd⁴ status: Must be in running status, because the product installer requires a functional systemd environment and systemd is used to manage the service state of the Chef server (see the sketch that follows)

⁴ systemd is a suite of basic building blocks for a Linux system. Visit http://www.freedesktop.org/wiki/Software/systemd/
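One possible way to verify the systemd status before starting the installer (a hedged sketch; the deployment server prompt is ours, and the Chef service unit name shown is hypothetical and may differ in practice):

# Confirm that systemd is functional by querying the default target
[root@deployserver ~]# systemctl is-active multi-user.target
active

# After installation, systemd manages the Chef server's service state;
# a unit name such as chef-server is an assumption here
[root@deployserver ~]# systemctl status chef-server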
2.4.4 Planning for IBM Cloud Manager with OpenStack deployment

This section describes considerations for deploying the controller and compute nodes and then outlines the information required for the deployment process. If you plan to build and manage a virtual environment using only KVM, skip this section.

Planning considerations

Consider the following before deploying cloud environment components, such as the controller node, compute node, and database node:

• Topology
  There are five kinds of predefined topologies provided by IBM Cloud Manager with OpenStack. A description of each topology is shown in Table 5-1 on page 79. Consider which topology will be used.
• Database server
  Determine the database server product that will be used for the IBM Cloud Manager with OpenStack databases. As of this writing, the supported databases are DB2, MariaDB, and MySQL.
• Number of NICs
  Only one NIC is needed for the management network of KVM for IBM z Systems as a compute node. However, if you want virtual machines on the compute node to use the DHCP and L3 services provided by Neutron⁵, the controller and compute nodes must have at least two NICs: one for the management network and one for the data network.
• Network type
  Determine the network type: local, flat, VLAN, generic routing encapsulation (GRE), or virtual extensible LAN (VXLAN).
• Web browsers
  Select a web browser on your desktop environment as the client to access the IBM Cloud Manager with OpenStack servers. These are the minimum supported versions:
  – Internet Explorer 11.0 with the latest fix pack
  – Firefox 31 with the latest fix pack
  – Chrome 38 with the latest fix pack
  – Safari 7 with the latest fix pack

Information required for deployment

This list depends on the topology that will be used, but this type of information is usually required during installation:

• Controller node:
  – Environment name
  – IP address
  – Network interface name
  – Open vSwitch network type
  – Fully qualified domain name
  – The root user login information, either a password or a Secure Shell (SSH) identity file

⁵ OpenStack Networking (Neutron); see either
http://docs.openstack.org/icehouse/install-guide/install/apt/content/basics-networking-neutron.html or
https://wiki.openstack.org/wiki/Neutron#OpenStack_Networking_.28.22Neutron.22.29
• Compute node for KVM for IBM z Systems:
  – Topology name of the compute node
  – Environment name
  – Fully qualified domain name
  – The root user login information (either a password or an SSH identity file)
  – IP address
  – Network interface name

• Deployment of virtual machines:
  – Network information, including the subnet, the IP address range for the subnet, the IP address of the gateway, the IP version, and the DNS server
  – Image source location and image file name
  – Image format (for example, QCOW2)
  – Minimum disk and minimum RAM (if needed)
Chapter 3. Installing and configuring the environment

This chapter provides the step-by-step instructions that were performed to build our KVM environment. It contains three parts:

• Our configuration: Describes our installation goal, together with the resources we used
• Setting up KVM for IBM z Systems: Explains the preparation, installation, and configuration steps
• Deploying virtual machines: Lists the domain definition and the Linux on z Systems installation
3.1 Our configuration

This section describes our target configuration and the components and hardware resources that we use to implement it.

3.1.1 Logical view

Figure 3-1 illustrates a logical view of our target configuration. Our goal is to allow virtual machines to connect to two different networks: one for management traffic and the other for user data traffic. This is achieved by creating two separate Open vSwitch bridges. KVM for IBM z Systems is connected directly to the management network. We implemented two KVM for IBM z Systems images with the same logical configuration so that the virtual servers can be migrated between hypervisors as needed.

Figure 3-1 Logical configuration (virtual machines attached to the Open vSwitch bridges vsw-mgmt and vsw-data, which connect to the management network and the data network)

3.1.2 Physical resources

Figure 3-2 on page 29 shows our hardware and connectivity setup:

• One IBM z13 with two LPARs
• Two OSA cards connected to the management network
• Two OSA cards connected to a data network
• Multiple FICON cards for connectivity to storage:
  – SCSI devices
  – ECKD devices
• One FTP server
• One x86 server running IBM Cloud Manager with OpenStack (controller node)

Both LPARs have access to all resources. We used one LPAR for installing KVM for IBM z Systems on SCSI devices and the other LPAR for installing KVM for IBM z on ECKD devices.
Figure 3-2 Our environment: hardware resources and connectivity

3.1.3 Preparation tasks

There are several tasks to perform before the KVM for IBM z installer can be started, which we explain in the subsections that follow:

• Input/output configuration data set (IOCDS)
• Storage area network (SAN)
• FTP server

Input/output configuration data set (IOCDS)

An IOCDS was prepared to support our environment, as shown in Figure 3-2. We had two logical partitions (A25 and A2F) with different channel types (OSA CHPIDs, FCP CHPIDs, and FICON CHPIDs).
An IOCDS sample for the LPARs and each channel type is provided in Example 3-1.

Example 3-1   Sample IOCDS definitions

******************************************************
****  Sample LPAR and Channel Subsystem         ******
******************************************************
RESOURCE PARTITION=((CSS(0),(A25,5),(A2F,F)))
******************************************************
****  Sample OSA CHPID / CNTLUNIT and IODEVICE  ******
******************************************************
CHPID PATH=(CSS(0),04),SHARED,                         *
      PARTITION=((CSS(0),(A25,A2F),(=))),              *
      PCHID=214,TYPE=OSD
CNTLUNIT CUNUMBR=2D00,                                 *
      PATH=((CSS(0),04)),                              *
      UNIT=OSA
IODEVICE ADDRESS=(2D00,015),CUNUMBR=(2D00),UNIT=OSA
IODEVICE ADDRESS=(2D0F,001),UNITADD=FE,CUNUMBR=(2D00), *
      UNIT=OSAD
******************************************************
****  Sample FCP CHPID / CNTLUNIT and IODEVICE  ******
******************************************************
CHPID PATH=(CSS(0),76),SHARED,                         *
      PARTITION=((CSS(0),(A25,A2F),(=))),              *
      PCHID=1B1,TYPE=FCP
CNTLUNIT CUNUMBR=B600,                                 *
      PATH=((CSS(0),76)),UNIT=FCP
IODEVICE ADDRESS=(B600,032),CUNUMBR=(B600),UNIT=FCP
IODEVICE ADDRESS=(B6FC,002),CUNUMBR=(B600),UNIT=FCP
******************************************************
****  Sample FICON CHPID / CNTLUNIT and IODEVICE ******
******************************************************
CHPID PATH=(CSS(0),48),SHARED,                         *
      PARTITION=((CSS(0),(A25,A2F),(=))),              *
      SWITCH=61,PCHID=11D,TYPE=FC
CNTLUNIT CUNUMBR=6200,                                 *
      PATH=((CSS(0),48)),UNITADD=((00,256)),           *
      LINK=((CSS(0),08)),CUADD=2,UNIT=2107
IODEVICE ADDRESS=(6200,042),CUNUMBR=(6200),STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(622A,214),CUNUMBR=(6200),STADET=Y,SCHSET=1, *
      UNIT=3390A

For more information about IOCDS, see Stand-Alone Input/Output Configuration Program User’s Guide, IBM System z, SB10-7152:
http://www.ibm.com/support/docview.wss?uid=pub1sb10715206
Storage area network (SAN)

The SAN configuration usually involves tasks such as cabling, zoning, and LUN masking. We defined 10 LUNs on disk storage, targeting the worldwide port names (WWPNs) of the disk adapters.

FTP server

We used an FTP server with IP address 192.168.60.15 and FTP user credentials. We created two directories in the FTP directory: KVM and SLES12SP1. In each directory, we created a DVD1 directory to which we mounted the corresponding .iso file (a sketch of these preparation steps appears at the end of this section).

Because the DVD1 directory is mounted as read-only, and because we needed to create various .ins and .prm files, we copied the DVD1/images directory to the main KVM directory and created .ins files in that directory. Then, we created corresponding .prm files in the images/ directory. The resulting structure looks like this:

KVM/
– DVD1/ (KVM for IBM z ISO image mounted as read-only)
– images/
  • generic.prm
  • initrd.addrsize
  • initrd.img
  • install.img
  • itso1.prm
  • itso2.prm
  • kernel.img
  • TRANS.TBL
  • upgrade.img
– itso1.ins
– itso2.ins
SLES12SP1/
– DVD1/ (SLES12SP1 ISO image mounted as read-only)

3.2 Setting up KVM for IBM z Systems

This section lists the steps needed to install KVM for IBM z, from preparation tasks, through the installation process, to the final configuration for our environment. We describe the following tasks in this section:

• Preparing the .ins and .prm files
• Installing KVM for IBM z
• Configuring KVM for IBM z

Note: This section shows the installation and configuration of KVM for IBM z with SCSI devices. There are only subtle changes when installing on ECKD devices, as described in Appendix A, “Installing KVM for IBM z Systems with ECKD devices” on page 95.
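As groundwork, the FTP directory layout described in 3.1.3, “Preparation tasks” can be prepared as follows (a hedged sketch; the FTP root path and ISO file names are illustrative, so substitute your own):

# Create the directory tree on the FTP server
[root@ftpserver ~]# mkdir -p /var/ftp/KVM/DVD1 /var/ftp/SLES12SP1/DVD1

# Loop-mount the ISO images read-only into the DVD1 directories
[root@ftpserver ~]# mount -o ro,loop KVM-for-IBM-z.iso /var/ftp/KVM/DVD1
[root@ftpserver ~]# mount -o ro,loop SLES-12-SP1.iso /var/ftp/SLES12SP1/DVD1

# Copy the images directory to a writable location for custom .prm files
[root@ftpserver ~]# cp -r /var/ftp/KVM/DVD1/images/ /var/ftp/KVM/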
3.2.1 Preparing the .ins and .prm files

As described in “FTP server” on page 31, we had an FTP server to use for installing KVM for IBM z. We created a directory structure that contained the .ins and .prm files needed for the KVM for IBM z installer.

Example 3-2 shows the contents of the itso1.ins file, which is a copy of the generic .ins file provided in the DVD1 directory. Only the line pointing to itso1.prm was modified.

Example 3-2   itso1.ins

* for itsokvm1
images/kernel.img 0x00000000
images/initrd.img 0x02000000
images/itso1.prm 0x00010480
images/initrd.addrsize 0x00010408

Example 3-3 shows the itso1.prm file. It defines LUNs for the installer, network properties, and the location of the FTP repository.

Example 3-3   itso1.prm

ro ramdisk_size=40000
rd.zfcp=0.0.b600,0x500507680120bc24,0x0000000000000000
rd.zfcp=0.0.b600,0x500507680120bc24,0x0001000000000000
rd.zfcp=0.0.b600,0x500507680120bc24,0x0002000000000000
rd.zfcp=0.0.b700,0x500507680120bb91,0x0000000000000000
rd.zfcp=0.0.b700,0x500507680120bb91,0x0001000000000000
rd.zfcp=0.0.b700,0x500507680120bb91,0x0002000000000000
rd.znet=qeth,0.0.2d00,0.0.2d01,0.0.2d02,layer2=1,portno=0,portname=DUMMY
ip=192.168.60.70::192.168.60.1:255.255.255.0:itsokvm1:enccw0.0.2d00:none
inst.repo=ftp://ftp:ftp@192.168.60.15/KVM/DVD1

Each rd.zfcp statement contains three parameters that together define a path to a LUN. The first parameter defines the FCP device on the server side (that is, a device from the IOCDS). The second parameter defines the target WWPN, which is a WWPN of the disk storage. The third parameter provides a LUN number. This means that the rd.zfcp statements in Example 3-3 define two different paths to each of three LUNs.

The rd.znet statement defines which device triplet is used as the NIC for the installer. The ip statement defines the IP properties for the NIC. The inst.repo statement defines the location of the installation repository for KVM for IBM z. In our case, this is the read-only directory of a loop-mounted ISO image.
3.2.2 Installing KVM for IBM z

This section describes the steps for installing KVM for IBM z with SCSI devices. Figure 3-3 shows two logical partitions: A25 and A2F. Both partitions are active without a running operating system.

Figure 3-3 Two unused logical partitions

We installed KVM for IBM z using an FTP server. Figure 3-4 shows how to invoke the Load from Removable Media, or Server panel by selecting a target LPAR, clicking the small arrow icon next to its name, and selecting Recovery and then the Load from Removable Media, or Server task.

Figure 3-4 Invoke Load from Removable Media, or Server
Figure 3-5 shows the window in which we provided the IP address of our FTP server, together with the FTP credentials. The file location field points to the directory where we put our .ins files, as described in 3.1.3, “Preparation tasks” on page 29.

Figure 3-5 Load from Removable Media, or Server

When the FTP server is contacted, a table listing all of the .ins files displays. We chose the itso1.ins file, as shown in Figure 3-6. This file contains all the necessary information for installing KVM for IBM z on our SCSI devices.

Figure 3-6 Select the Software to Install window

Load is a disruptive action, which requires a confirmation, as shown in Figure 3-7.

Figure 3-7 Task confirmation dialog
It takes time to load the installer. To see what was happening on the server, we opened the Operating System Messages panel. When the installer was ready, it printed a message prompting us to open a Secure Shell (SSH) connection, as shown in Figure 3-8. Notice that all installer panels use the ncurses interface.

Figure 3-8 Operating system messages

After opening an SSH session, a panel opens (see Figure 3-9 on page 35) from which you can select the language:

• Use the Tab key to move among fields.
• Use the Enter key and spacebar to press a button.

You can switch between the installer, shell, and debug panels by using the Ctrl-Right or Ctrl-Left arrow keys at any time during the installation.

Figure 3-9 Welcome to KVM for IBM z

After accepting the International Program License Agreement, IBM and non-IBM Terms and Conditions, and confirming that you want to install KVM for IBM z, the panel for selecting the disks for installation displays.
Figure 3-10 shows the panel that displays the available LUNs. These are the three LUNs that we defined in the .prm file in 3.2.1, “Preparing the .ins and .prm files” on page 32. The LUNs are recognized as multipathed devices. From this panel, it is not clear which mpath device represents which LUN. Such information is useful for manual partitioning.

Figure 3-10 Devices to install KVM for IBM z to

To determine which mpath represents which LUN, we switched to the shell using Ctrl-Right Arrow. The multipath command (see Example 3-4 on page 36) displays three interesting pieces of information: mpathe represents LUN 0, mpatha represents LUN 1, and mpathf represents LUN 2. In addition to the two paths to each of our three LUNs specified in the parameter file, the installer detected six additional available paths to each LUN. Aside from the three LUNs specified in the parameter file, the installer also discovered another seven LUNs available to our LPAR.

Example 3-4   multipath output

[root@itsokvm1 ~]# multipath -l
mpathe (360050768018305e120000000000000ea) dm-4 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:3:0 sdr 65:16 active undef running
| |- 1:0:0:0 sde 8:64 active undef running
| |- 1:0:3:0 sdaa 65:160 active undef running
| `- 0:0:2:0 sda 8:0 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:4:0 sdaf 65:240 active undef running
  |- 0:0:5:0 sdap 66:144 active undef running
  |- 1:0:4:0 sdbi 67:192 active undef running
  `- 1:0:5:0 sdbs 68:96 active undef running
mpathd (360050768018305e120000000000000f0) dm-3 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:6 sdi 8:128 active undef running
| |- 1:0:0:6 sdq 65:0 active undef running
| |- 0:0:3:6 sdab 65:176 active undef running
| `- 1:0:3:6 sdbe 67:128 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:4:6 sdal 66:80 active undef running
  |- 0:0:5:6 sdav 66:240 active undef running
  |- 1:0:4:6 sdbo 68:32 active undef running
  `- 1:0:5:6 sdby 68:192 active undef running
mpathc (360050768018305e120000000000000ed) dm-2 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:3 sdai 66:32 active undef running
| |- 0:0:5:3 sdas 66:192 active undef running
| |- 1:0:4:3 sdbl 67:240 active undef running
| `- 1:0:5:3 sdbv 68:144 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:2:3 sdd 8:48 active undef running
  |- 1:0:0:3 sdm 8:192 active undef running
  |- 0:0:3:3 sdx 65:112 active undef running
  `- 1:0:3:3 sdbb 67:80 active undef running
mpathb (360050768018305e120000000000000ee) dm-1 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:4 sdf 8:80 active undef running
| |- 1:0:0:4 sdo 8:224 active undef running
| |- 0:0:3:4 sdy 65:128 active undef running
| `- 1:0:3:4 sdbc 67:96 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:4:4 sdaj 66:48 active undef running
  |- 0:0:5:4 sdat 66:208 active undef running
  |- 1:0:4:4 sdbm 68:0 active undef running
  `- 1:0:5:4 sdbw 68:160 active undef running
mpatha (360050768018305e120000000000000eb) dm-0 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:1 sdag 66:0 active undef running
| |- 0:0:5:1 sdaq 66:160 active undef running
| |- 1:0:4:1 sdbj 67:208 active undef running
| `- 1:0:5:1 sdbt 68:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:2:1 sdb 8:16 active undef running
  |- 1:0:0:1 sdg 8:96 active undef running
  |- 0:0:3:1 sdt 65:48 active undef running
  `- 1:0:3:1 sdaz 67:48 active undef running
mpathj (360050768018305e120000000000000f2) dm-9 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 1:0:0:8 sdu 65:64 active undef running
| |- 0:0:3:8 sdad 65:208 active undef running
| |- 0:0:2:8 sdl 8:176 active undef running
| `- 1:0:3:8 sdbg 67:160 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:4:8 sdan 66:112 active undef running
  |- 0:0:5:8 sdax 67:16 active undef running
  |- 1:0:4:8 sdbq 68:64 active undef running
  `- 1:0:5:8 sdca 68:224 active undef running
mpathi (360050768018305e120000000000000f3) dm-8 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:9 sdao 66:128 active undef running
| |- 0:0:5:9 sday 67:32 active undef running
| |- 1:0:4:9 sdbr 68:80 active undef running
| `- 1:0:5:9 sdcb 68:240 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 1:0:0:9 sdw 65:96 active undef running
  |- 0:0:2:9 sdn 8:208 active undef running
  |- 0:0:3:9 sdae 65:224 active undef running
  `- 1:0:3:9 sdbh 67:176 active undef running
mpathh (360050768018305e120000000000000f1) dm-7 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:7 sdam 66:96 active undef running
| |- 0:0:5:7 sdaw 67:0 active undef running
| |- 1:0:4:7 sdbp 68:48 active undef running
| `- 1:0:5:7 sdbz 68:208 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:2:7 sdj 8:144 active undef running
  |- 0:0:3:7 sdac 65:192 active undef running
  |- 1:0:0:7 sds 65:32 active undef running
  `- 1:0:3:7 sdbf 67:144 active undef running
mpathg (360050768018305e120000000000000ef) dm-6 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:4:5 sdak 66:64 active undef running
| |- 0:0:5:5 sdau 66:224 active undef running
| |- 1:0:4:5 sdbn 68:16 active undef running
| `- 1:0:5:5 sdbx 68:176 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 1:0:0:5 sdp 8:240 active undef running
  |- 0:0:2:5 sdh 8:112 active undef running
  |- 0:0:3:5 sdz 65:144 active undef running
  `- 1:0:3:5 sdbd 67:112 active undef running
mpathf (360050768018305e120000000000000ec) dm-5 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 1:0:0:2 sdk 8:160 active undef running
| |- 0:0:2:2 sdc 8:32 active undef running
| |- 0:0:3:2 sdv 65:80 active undef running
| `- 1:0:3:2 sdba 67:64 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:4:2 sdah 66:16 active undef running
  |- 0:0:5:2 sdar 66:176 active undef running
  |- 1:0:4:2 sdbk 67:224 active undef running
  `- 1:0:5:2 sdbu 68:128 active undef running
Example 3-5 shows the output confirming that only three LUNs are configured for use, as specified in the parameter file, although 10 LUNs were discovered.

Example 3-5   lszfcp output

[root@itsokvm1 ~]# lszfcp -D
0.0.b600/0x500507680120bc24/0x0000000000000000 0:0:0:0
0.0.b600/0x500507680120bc24/0x0001000000000000 0:0:0:1
0.0.b600/0x500507680120bc24/0x0002000000000000 0:0:0:2
0.0.b600/0x500507680130bc24/0x0000000000000000 0:0:1:0
0.0.b600/0x500507680130bc24/0x0001000000000000 0:0:1:1
0.0.b600/0x500507680130bc24/0x0002000000000000 0:0:1:2
0.0.b600/0x500507680120bb91/0x0000000000000000 0:0:2:0
0.0.b600/0x500507680120bb91/0x0001000000000000 0:0:2:1
0.0.b600/0x500507680120bb91/0x0002000000000000 0:0:2:2
0.0.b600/0x500507680130bb91/0x0000000000000000 0:0:3:0
0.0.b600/0x500507680130bb91/0x0001000000000000 0:0:3:1
0.0.b600/0x500507680130bb91/0x0002000000000000 0:0:3:2
0.0.b700/0x500507680120bc24/0x0000000000000000 1:0:0:0
0.0.b700/0x500507680120bc24/0x0001000000000000 1:0:0:1
0.0.b700/0x500507680120bc24/0x0002000000000000 1:0:0:2
0.0.b700/0x500507680130bc24/0x0000000000000000 1:0:1:0
0.0.b700/0x500507680130bc24/0x0001000000000000 1:0:1:1
0.0.b700/0x500507680130bc24/0x0002000000000000 1:0:1:2
0.0.b700/0x500507680120bb91/0x0000000000000000 1:0:2:0
0.0.b700/0x500507680120bb91/0x0001000000000000 1:0:2:1
0.0.b700/0x500507680120bb91/0x0002000000000000 1:0:2:2
0.0.b700/0x500507680130bb91/0x0000000000000000 1:0:3:0
0.0.b700/0x500507680130bb91/0x0001000000000000 1:0:3:1
0.0.b700/0x500507680130bb91/0x0002000000000000 1:0:3:2

Figure 3-11 shows that we selected all three configured LUNs that KVM for IBM z will be installed on. In this panel, we can define additional devices if needed.

Figure 3-11 Selected devices
Figure 3-12 shows the panel in which we can select automatic or manual partitioning. For our installation, we chose automatic partitioning because we did not have any particular requirements for the system layout.

Figure 3-12 Select partition method

Figure 3-13 shows the partition summary panel.

Figure 3-13 Partition summary panel

Next, we chose the time zone, as depicted in Figure 3-14.

Figure 3-14 Time zone selection
In most installations, it is required to have a common time source among all components in the IT environment. The IBM z Systems platform uses Server Time Protocol (STP) as its time source provider, so we did not enable NTP servers, as shown in Figure 3-15.

Figure 3-15 NTP configuration

Figure 3-16 shows the panel for network configuration. A NIC named enccw0.0.2d00 was already set online by the installer. This NIC was specified in the parameter file that is described in 3.2.1, “Preparing the .ins and .prm files” on page 32. If no network had been specified in the parameter file, or if we needed to configure another card, this panel would have allowed it. We decided to check whether the IP information for the NIC was set as specified in the parameter file.

Figure 3-16 Configure network

Figure 3-17 shows the configuration of the enccw0.0.2d00 NIC. All of the parameters were correctly read from the parameter file, and no changes were needed.

Figure 3-17 Network device configuration
We did not need to configure another NIC, so we went to the next panel, as shown in Figure 3-18.

Figure 3-18 Configure network

Figure 3-19 shows the DNS configuration panel. The value in the Hostname field was read from the parameter file. We did not provide any other DNS parameters because they were not needed in our environment.

Figure 3-19 DNS configuration

Figure 3-20 shows the installation summary.

Figure 3-20 Installation summary
If there were existing partitions or volume groups, the panel shown in Figure 3-21 would inform us that they were going to be removed.

Figure 3-21 Partitions and LVMs to be removed

After pressing Ok, the installation begins. The progress bar shown in Figure 3-22 reports the installation status.

Figure 3-22 Installation progress

After the installation process is finished, the panel shown in Figure 3-23 opens. After a reboot, KVM for IBM z Systems is ready for use.

Figure 3-23 Reboot after installation

3.2.3 Configuring KVM for IBM z

This section describes several additional tasks that we needed to perform in our environment after KVM for IBM z was installed:

• “Identifying our IPL device”
• “Applying maintenance” on page 45
• “Defining NICs” on page 46
• “Defining Open vSwitches” on page 48
• “Adding LUNs” on page 50
  • 58.
Identifying our IPL device
During the installation we used automatic partitioning, and we had no control over which LUN was to be used as the initial program load (IPL) device. Example 3-6 shows that the /boot mount point resides on device 360050768018305e120000000000000ec.

Example 3-6 Find /boot device
[root@itsokvm1 ~]# mount | grep boot
/dev/mapper/360050768018305e120000000000000ec1 on /boot type ext4 (rw,relatime,seclabel,data=ordered)

Example 3-7 shows the output from the multipath command. It shows that device 360050768018305e120000000000000ec maps to LUN 2.

Example 3-7 multipath output
[root@itsokvm1 ~]# multipath -l
360050768018305e120000000000000ec dm-0 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:0:2 sdd 8:48 active undef running
| |- 0:0:1:2 sda 8:0 active undef running
| |- 1:0:0:2 sdf 8:80 active undef running
| `- 1:0:1:2 sdh 8:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:2 sdb 8:16 active undef running
|- 0:0:3:2 sdc 8:32 active undef running
|- 1:0:2:2 sdi 8:128 active undef running
`- 1:0:3:2 sdj 8:144 active undef running
360050768018305e120000000000000eb dm-6 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:1 sdr 65:16 active undef running
| |- 0:0:3:1 sdt 65:48 active undef running
| |- 1:0:2:1 sdv 65:80 active undef running
| `- 1:0:3:1 sdx 65:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:0:1 sdw 65:96 active undef running
|- 0:0:1:1 sdq 65:0 active undef running
|- 1:0:0:1 sds 65:32 active undef running
`- 1:0:1:1 sdu 65:64 active undef running
360050768018305e120000000000000ea dm-1 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:0:0 sdn 8:208 active undef running
| |- 0:0:1:0 sde 8:64 active undef running
| |- 1:0:0:0 sdk 8:160 active undef running
| `- 1:0:1:0 sdm 8:192 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:0 sdg 8:96 active undef running
|- 0:0:3:0 sdl 8:176 active undef running
|- 1:0:2:0 sdp 8:240 active undef running
`- 1:0:3:0 sdo 8:224 active undef running
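The same mapping can be derived without reading the full multipath output by hand. The following one-liner is a minimal sketch of our own, assuming (as in our setup) that the /boot partition name is the multipath map name with a trailing "1":

[root@itsokvm1 ~]# BOOTDEV=$(findmnt -n -o SOURCE /boot)       # e.g. /dev/mapper/360...ec1
[root@itsokvm1 ~]# multipath -l "$(basename "${BOOTDEV%1}")"   # paths for the underlying LUN only

The LUN number then appears directly in the H:C:T:L column of the output (2 in our case).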
Figure 3-24 shows how to IPL KVM for IBM z from the correct LUN when needed.

Figure 3-24 Load window

Applying maintenance
At the time of writing, Fix Pack 1 (FP1) was available from:
http://www.ibm.com/support/fixcentral/

After downloading the code, we followed the steps provided in the README file that accompanied FP1. Example 3-8 shows the commands that we executed, as instructed.

Example 3-8 Applying fixes
[root@itsokvm1 ~]# ll
total 152360
-rw-r--r--. 1 root root 156010496 Sep 22 11:11 KVMIBM-1.1.0.1-20150911-s390x.iso
-rw-r--r--. 1 root root 3260 Sep 22 11:11 README
[root@itsokvm1 ~]# mkdir -p /mnt/FIXPACK
[root@itsokvm1 ~]# mount -o ro,loop KVMIBM-1.1.0.1-20150911-s390x.iso /mnt/FIXPACK/
[root@itsokvm1 ~]# ls -l /mnt/FIXPACK/
total 41
dr-xr-xr-x. 2 1055 1055 2048 Sep 10 18:00 apar_db
-r-xr-xr-x. 1 1055 1055 33836 Sep 10 18:00 ibm_apar.sh
-r--r--r--. 1 1055 1055 3266 Sep 10 18:00 README
dr-xr-xr-x. 4 1055 1055 2048 Sep 10 18:00 Updates
[root@itsokvm1 ~]# cd /mnt/FIXPACK
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -y /mnt/FIXPACK/Updates/
Generating local repository to /mnt/FIXPACK/Updates/ ..
fixpack.repo :
[FIXPACK]
name=IBM FixPack ISO
baseurl=file:///mnt/FIXPACK/Updates/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-KVM-FOR-IBM
Copy fixpack.repo to /etc/yum.repos.d/ ? [y/N]y
/tmp//fixpack.repo -> /etc/yum.repos.d/fixpack.repo
Installation of REPO FIXPACK successful
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -a
Fetching packages from yum...
Creating APAR dependency list...
Analysing the available APAR against installed rpms
APAR | Status | Subject
-------------------------------------------------------------
ZZ00466 | NONE | FP1 fix collection (128088)
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -i latest
Found latest available APAR: ZZ00466
...
Do you want to continue with installation [y/N]y
Clean expirable cache files..
...
Total download size: 147 M
Is this ok [y/d/N]: y
Downloading packages:
...
Complete!
Processing done.
[root@itsokvm1 FIXPACK]# ./ibm_apar.sh -a
Fetching packages from yum...
Creating APAR dependency list...
Analysing the available APAR against installed rpms
APAR | Status | Subject
-------------------------------------------------------------
ZZ00466 | APPLIED | FP1 fix collection (128088)
[root@itsokvm1 FIXPACK]# reboot

Defining NICs
As described in 3.1, “Our configuration” on page 28, our environment needed more than one NIC to support two different LANs for virtual servers, each LAN connected through a bonding interface. Our image contains only one NIC, as shown in Example 3-9. It is the NIC that provides access to KVM for IBM z.

Example 3-9 Checking configured NICs
[root@itsokvm1 ~]# znetconf -c
Device IDs Type Card Type CHPID Drv. Name State
--------------------------------------------------------------------------------
0.0.2d00,0.0.2d01,0.0.2d02 1731/01 OSD_1000 04 qeth enccw0.0.2d00 online
Example 3-10 shows a list of unconfigured NICs available to our environment.

Example 3-10 Checking available NICs
[root@itsokvm1 ~]# znetconf -u
Scanning for network devices...
Device IDs Type Card Type CHPID Drv.
------------------------------------------------------------
0.0.2d03,0.0.2d04,0.0.2d05 1731/01 OSA (QDIO) 04 qeth
0.0.2d06,0.0.2d07,0.0.2d08 1731/01 OSA (QDIO) 04 qeth
0.0.2d09,0.0.2d0a,0.0.2d0b 1731/01 OSA (QDIO) 04 qeth
0.0.2d0c,0.0.2d0d,0.0.2d0e 1731/01 OSA (QDIO) 04 qeth
0.0.2d20,0.0.2d21,0.0.2d22 1731/01 OSA (QDIO) 05 qeth
0.0.2d23,0.0.2d24,0.0.2d25 1731/01 OSA (QDIO) 05 qeth
0.0.2d26,0.0.2d27,0.0.2d28 1731/01 OSA (QDIO) 05 qeth
0.0.2d29,0.0.2d2a,0.0.2d2b 1731/01 OSA (QDIO) 05 qeth
0.0.2d2c,0.0.2d2d,0.0.2d2e 1731/01 OSA (QDIO) 05 qeth
0.0.2d40,0.0.2d41,0.0.2d42 1731/01 OSA (QDIO) 06 qeth
0.0.2d43,0.0.2d44,0.0.2d45 1731/01 OSA (QDIO) 06 qeth
0.0.2d46,0.0.2d47,0.0.2d48 1731/01 OSA (QDIO) 06 qeth
0.0.2d49,0.0.2d4a,0.0.2d4b 1731/01 OSA (QDIO) 06 qeth
0.0.2d4c,0.0.2d4d,0.0.2d4e 1731/01 OSA (QDIO) 06 qeth
0.0.2d60,0.0.2d61,0.0.2d62 1731/01 OSA (QDIO) 07 qeth
0.0.2d63,0.0.2d64,0.0.2d65 1731/01 OSA (QDIO) 07 qeth
0.0.2d66,0.0.2d67,0.0.2d68 1731/01 OSA (QDIO) 07 qeth
0.0.2d69,0.0.2d6a,0.0.2d6b 1731/01 OSA (QDIO) 07 qeth

As shown in Figure 3-2 on page 29, we chose to use devices 2d03, 2d23, 2d43, and 2d63 to connect our Open vSwitch bridges to the LAN. The devices need to be configured as Layer 2 devices, and they need to be able to provide bridging functions. We configured them with the required parameters and confirmed that the needed devices were online, as shown in Example 3-11.

Example 3-11 Configuring NICs online
[root@itsokvm1 ~]# znetconf -a 2d03 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d03 (enccw0.0.2d03)
[root@itsokvm1 ~]# znetconf -a 2d23 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d23 (enccw0.0.2d23)
[root@itsokvm1 ~]# znetconf -a 2d43 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d43 (enccw0.0.2d43)
[root@itsokvm1 ~]# znetconf -a 2d63 -o layer2=1 -o bridge_role=primary
Scanning for network devices...
Successfully configured device 0.0.2d63 (enccw0.0.2d63)
[root@itsokvm1 ~]# znetconf -c
Device IDs Type Card Type CHPID Drv. Name State
--------------------------------------------------------------------------------
0.0.2d00,0.0.2d01,0.0.2d02 1731/01 OSD_1000 04 qeth enccw0.0.2d00 online
0.0.2d03,0.0.2d04,0.0.2d05 1731/01 OSD_1000 04 qeth enccw0.0.2d03 online
0.0.2d23,0.0.2d24,0.0.2d25 1731/01 OSD_1000 05 qeth enccw0.0.2d23 online
0.0.2d43,0.0.2d44,0.0.2d45 1731/01 OSD_1000 06 qeth enccw0.0.2d43 online
0.0.2d63,0.0.2d64,0.0.2d65 1731/01 OSD_1000 07 qeth enccw0.0.2d63 online

Example 3-12 shows a test of the bridging capabilities of the newly configured NICs.

Example 3-12 Check bridging capabilities
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d03/device/bridge_state
active
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d23/device/bridge_state
active
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d43/device/bridge_state
active
[root@itsokvm1 ~]# cat /sys/class/net/enccw0.0.2d63/device/bridge_state
active

We brought the NICs online dynamically, so these changes will not persist across a system restart. To make the changes persistent, there must be corresponding ifcfg-enccw0.0.2dx3 files in the /etc/sysconfig/network-scripts directory. An example of such a file is shown in Example 3-13. A corresponding file must be created for each NIC, or four files in our case.

Example 3-13 Make changes permanent
[root@itsokvm1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-enccw0.0.2d03
TYPE=Ethernet
BOOTPROTO=none
NAME=enccw0.0.2d03
DEVICE=enccw0.0.2d03
ONBOOT=yes
NETTYPE=qeth
SUBCHANNELS="0.0.2d03,0.0.2d04,0.0.2d05"
OPTIONS="layer2=1 bridge_reflect_promisc=primary buffer_count=128"

Defining Open vSwitches
As described in 3.1, “Our configuration” on page 28, we needed to create two Open vSwitches (shown as OVS in our examples). For KVM for IBM z to handle OVS, the openvswitch service must be running. This service is not enabled by default. Example 3-14 shows the commands to check whether the service is running, enable the service to be started after a system restart, start the service dynamically, and check the status after the service is started.

Example 3-14 openvswitch service
[root@itsokvm1 ~]# ovs-vsctl show
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
[root@itsokvm1 ~]# systemctl status openvswitch
openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled)
   Active: inactive (dead)
[root@itsokvm1 ~]# systemctl enable openvswitch
ln -s '/usr/lib/systemd/system/openvswitch.service' '/etc/systemd/system/multi-user.target.wants/openvswitch.service'
[root@itsokvm1 ~]# systemctl start openvswitch
[root@itsokvm1 ~]# systemctl status openvswitch
openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled)
   Active: active (exited) since Wed 2015-09-23 09:00:14 EDT; 3s ago
  Process: 5366 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 5366 (code=exited, status=0/SUCCESS)
Sep 23 09:00:14 itsokvm2 systemd[1]: Starting Open vSwitch...
Sep 23 09:00:14 itsokvm2 systemd[1]: Started Open vSwitch.
[root@itsokvm1 ~]# ovs-vsctl show
bcd5c59b-b1fd-4f95-8f66-926c1ffdc227
    ovs_version: "2.3.0"

We created two OVS bridges and added bonding interfaces, each consisting of two NICs, to connect each bridge to the LAN, as shown in Example 3-15.

Example 3-15 Create bridge and bond port
[root@itsokvm1 ~]# ovs-vsctl add-br vsw_mgmt
[root@itsokvm1 ~]# ovs-vsctl add-br vsw_data
[root@itsokvm1 ~]# ovs-vsctl add-bond vsw_mgmt bond0 enccw0.0.2d03 enccw0.0.2d43
[root@itsokvm1 ~]# ovs-vsctl add-bond vsw_data bond1 enccw0.0.2d23 enccw0.0.2d63

Example 3-16 shows the defined switches and their interfaces.

Example 3-16 Defined bridges
[root@itsokvm1 ~]# ovs-vsctl show
e7d10201-8a83-42db-a8c9-96aa7a9bb17c
    Bridge vsw_mgmt
        Port vsw_mgmt
            Interface vsw_mgmt
                type: internal
        Port "bond0"
            Interface "enccw0.0.2d43"
            Interface "enccw0.0.2d03"
    Bridge vsw_data
        Port vsw_data
            Interface vsw_data
                type: internal
        Port "bond1"
            Interface "enccw0.0.2d63"
            Interface "enccw0.0.2d23"
    ovs_version: "2.3.0"
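To inspect how a bond is behaving (the bond mode and the state of each member NIC), ovs-appctl can be queried. This is a verification step we did not capture in our examples; the bond name bond0 is from Example 3-15:

[root@itsokvm1 ~]# ovs-appctl bond/show bond0   # reports bond mode, member state, and the active member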
Adding LUNs
We decided to add two more LUNs to our environment to have more space available for qcow2 files. We added those two LUNs to the root volume group and extended the root file system dynamically.

To make the LUNs available to the system, we performed the steps outlined in Example 3-17. The first multipath command output shows the original setup, where three LUNs were available. Next, we added paths to two more LUNs in the /etc/zfcp.conf file. Then we ran zfcpconf.sh, which reads the /etc/zfcp.conf file and makes the devices listed in the file available to the system. This is followed by another multipath command, which shows that the two new LUNs became available.

Example 3-17 Adding LUNs
[root@itsokvm1 ~]# multipath -l
360050768018305e120000000000000ec dm-0 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:0:2 sdo 8:224 active ready running
| |- 0:0:2:2 sdu 65:64 active ready running
| |- 1:0:2:2 sda 8:0 active ready running
| `- 1:0:3:2 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 0:0:1:2 sdr 65:16 active ready running
|- 0:0:3:2 sdx 65:112 active ready running
|- 1:0:4:2 sdc 8:32 active ready running
`- 1:0:5:2 sdd 8:48 active ready running
360050768018305e120000000000000eb dm-1 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:1:1 sdq 65:0 active ready running
| |- 0:0:3:1 sdw 65:96 active ready running
| |- 1:0:4:1 sdg 8:96 active ready running
| `- 1:0:5:1 sdh 8:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 0:0:0:1 sdn 8:208 active ready running
|- 0:0:2:1 sdt 65:48 active ready running
|- 1:0:2:1 sde 8:64 active ready running
`- 1:0:3:1 sdf 8:80 active ready running
360050768018305e120000000000000ea dm-5 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:0:0 sdm 8:192 active ready running
| |- 0:0:2:0 sds 65:32 active ready running
| |- 1:0:2:0 sdi 8:128 active ready running
| `- 1:0:3:0 sdj 8:144 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 0:0:1:0 sdp 8:240 active ready running
|- 0:0:3:0 sdv 65:80 active ready running
|- 1:0:4:0 sdk 8:160 active ready running
`- 1:0:5:0 sdl 8:176 active ready running
[root@itsokvm1 ~]# vi /etc/zfcp.conf
0.0.b600 0x500507680130bc24 0x0002000000000000
0.0.b600 0x500507680120bb91 0x0002000000000000
0.0.b700 0x500507680120bc24 0x0002000000000000
0.0.b600 0x500507680130bb91 0x0002000000000000
0.0.b700 0x500507680130bc24 0x0002000000000000
0.0.b600 0x500507680120bc24 0x0002000000000000
0.0.b700 0x500507680120bb91 0x0002000000000000
0.0.b700 0x500507680130bb91 0x0002000000000000
0.0.b600 0x500507680130bc24 0x0000000000000000
0.0.b600 0x500507680120bb91 0x0000000000000000
0.0.b700 0x500507680120bc24 0x0000000000000000
0.0.b600 0x500507680130bb91 0x0000000000000000
0.0.b700 0x500507680130bc24 0x0000000000000000
0.0.b600 0x500507680120bc24 0x0000000000000000
0.0.b700 0x500507680130bb91 0x0000000000000000
0.0.b700 0x500507680120bb91 0x0000000000000000
0.0.b600 0x500507680130bc24 0x0001000000000000
0.0.b600 0x500507680120bb91 0x0001000000000000
0.0.b700 0x500507680120bc24 0x0001000000000000
0.0.b600 0x500507680130bb91 0x0001000000000000
0.0.b700 0x500507680130bc24 0x0001000000000000
0.0.b700 0x500507680120bb91 0x0001000000000000
0.0.b600 0x500507680120bc24 0x0001000000000000
0.0.b700 0x500507680130bb91 0x0001000000000000
0.0.b600 0x500507680130bc24 0x0003000000000000
0.0.b600 0x500507680120bb91 0x0003000000000000
0.0.b700 0x500507680120bc24 0x0003000000000000
0.0.b600 0x500507680130bb91 0x0003000000000000
0.0.b700 0x500507680130bc24 0x0003000000000000
0.0.b700 0x500507680120bb91 0x0003000000000000
0.0.b600 0x500507680120bc24 0x0003000000000000
0.0.b700 0x500507680130bb91 0x0003000000000000
0.0.b600 0x500507680130bc24 0x0004000000000000
0.0.b600 0x500507680120bb91 0x0004000000000000
0.0.b700 0x500507680120bc24 0x0004000000000000
0.0.b600 0x500507680130bb91 0x0004000000000000
0.0.b700 0x500507680130bc24 0x0004000000000000
0.0.b700 0x500507680120bb91 0x0004000000000000
0.0.b600 0x500507680120bc24 0x0004000000000000
0.0.b700 0x500507680130bb91 0x0004000000000000
[root@itsokvm1 ~]# zfcpconf.sh
[root@itsokvm1 ~]# multipath -l
360050768018305e120000000000000ee dm-11 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:2:4 sdag 66:0 active undef running
| |- 1:0:2:4 sdah 66:16 active undef running
| |- 0:0:0:4 sdai 66:32 active undef running
| `- 1:0:3:4 sdaj 66:48 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 1:0:4:4 sdal 66:80 active undef running
|- 0:0:1:4 sdak 66:64 active undef running
|- 0:0:3:4 sdan 66:112 active undef running
`- 1:0:5:4 sdam 66:96 active undef running
360050768018305e120000000000000ed dm-9 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:1:3 sdac 65:192 active undef running
| |- 1:0:5:3 sdae 65:224 active undef running
| |- 1:0:4:3 sdad 65:208 active undef running
| `- 0:0:3:3 sdaf 65:240 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:2:3 sdy 65:128 active undef running
|- 0:0:0:3 sdaa 65:160 active undef running
|- 1:0:3:3 sdab 65:176 active undef running
`- 1:0:2:3 sdz 65:144 active undef running
360050768018305e120000000000000ec dm-0 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:0:2 sdo 8:224 active undef running
| |- 0:0:2:2 sdu 65:64 active undef running
| |- 1:0:2:2 sda 8:0 active undef running
| `- 1:0:3:2 sdb 8:16 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:1:2 sdr 65:16 active undef running
|- 0:0:3:2 sdx 65:112 active undef running
|- 1:0:4:2 sdc 8:32 active undef running
`- 1:0:5:2 sdd 8:48 active undef running
360050768018305e120000000000000eb dm-1 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:1:1 sdq 65:0 active undef running
| |- 0:0:3:1 sdw 65:96 active undef running
| |- 1:0:4:1 sdg 8:96 active undef running
| `- 1:0:5:1 sdh 8:112 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:0:1 sdn 8:208 active undef running
|- 0:0:2:1 sdt 65:48 active undef running
|- 1:0:2:1 sde 8:64 active undef running
`- 1:0:3:1 sdf 8:80 active undef running
360050768018305e120000000000000ea dm-5 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:0:0 sdm 8:192 active undef running
| |- 0:0:2:0 sds 65:32 active undef running
| |- 1:0:2:0 sdi 8:128 active undef running
| `- 1:0:3:0 sdj 8:144 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:1:0 sdp 8:240 active undef running
|- 0:0:3:0 sdv 65:80 active undef running
|- 1:0:4:0 sdk 8:160 active undef running
`- 1:0:5:0 sdl 8:176 active undef running
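A quick sanity check at this point is to count the configured zfcp paths. In our setup, each LUN is reachable through two FCP channels (b600 and b700) and four target WWPNs, so five LUNs should yield 40 paths. This check is our own addition, not part of the documented procedure:

[root@itsokvm1 ~]# lszfcp -D | wc -l   # expect 40 paths: 2 channels x 4 WWPNs x 5 LUNs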
The next step is to create partitions on the new LUNs, as shown in Example 3-18.

Example 3-18 Creating partitions
[root@itsokvm1 ~]# fdisk /dev/disk/by-id/dm-uuid-mpath-360050768018305e120000000000000ed
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@itsokvm1 ~]# fdisk /dev/disk/by-id/dm-uuid-mpath-360050768018305e120000000000000ee
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

The partprobe command forces the kernel to reread the partitioning information. The ls command, executed afterward, shows that the new partitions are available to the system. This is shown in Example 3-19.

Example 3-19 Refresh partitioning information
[root@itsokvm1 ~]# partprobe
device-mapper: remove ioctl on 360050768018305e120000000000000eb1 failed: Device or resource busy
Warning: parted was unable to re-read the partition table on /dev/mapper/360050768018305e120000000000000eb (Device or resource busy). This means Linux won't know anything about the modifications you made.
device-mapper: create ioctl on 360050768018305e120000000000000eb1 failed: Device or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000eb1 failed: Device or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ea1 failed: Device or resource busy
Warning: parted was unable to re-read the partition table on /dev/mapper/360050768018305e120000000000000ea (Device or resource busy). This means Linux won't know anything about the modifications you made.
device-mapper: create ioctl on 360050768018305e120000000000000ea1 failed: Device or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ea1 failed: Device or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec3 failed: Device or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec2 failed: Device or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec1 failed: Device or resource busy
Warning: parted was unable to re-read the partition table on /dev/mapper/360050768018305e120000000000000ec (Device or resource busy). This means Linux won't know anything about the modifications you made.
device-mapper: create ioctl on 360050768018305e120000000000000ec1 failed: Device or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec1 failed: Device or resource busy
device-mapper: create ioctl on 360050768018305e120000000000000ec2 failed: Device or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec2 failed: Device or resource busy
device-mapper: create ioctl on 360050768018305e120000000000000ec3 failed: Device or resource busy
device-mapper: remove ioctl on 360050768018305e120000000000000ec3 failed: Device or resource busy
[root@itsokvm1 ~]# ls -l /dev/mapper/
total 0
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ea -> ../dm-6
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ea1 -> ../dm-8
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000eb -> ../dm-1
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000eb1 -> ../dm-5
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ec -> ../dm-0
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ec1 -> ../dm-2
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ec2 -> ../dm-3
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ec3 -> ../dm-4
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ed -> ../dm-7
lrwxrwxrwx. 1 root root 8 Sep 24 14:17 360050768018305e120000000000000ed1 -> ../dm-11
lrwxrwxrwx. 1 root root 7 Sep 24 14:17 360050768018305e120000000000000ee -> ../dm-9
lrwxrwxrwx. 1 root root 8 Sep 24 14:17 360050768018305e120000000000000ee1 -> ../dm-12
crw-------. 1 root root 10, 236 Sep 24 12:39 control
lrwxrwxrwx. 1 root root 8 Sep 24 14:17 zkvm1-root -> ../dm-10

These new partitions will be added to the root volume group (VG) later. However, for the loader to be able to bring the root VG up correctly, it needs to be aware of all of the LUNs that form the root VG. To achieve this, the initramfs must be re-created and zipl updated, as shown in Example 3-20. There is no need to modify the zipl.conf file, but zfcp.conf must contain all relevant LUN information, because this file is read by the dracut command.

Example 3-20 Modify initial ramdisk
[root@itsokvm1 ~]# dracut -f
[root@itsokvm1 ~]# zipl
Using config file '/etc/zipl.conf'
Run /lib/s390-tools/zipl_helper.device-mapper /boot
Building bootmap in '/boot'
Building menu 'zipl-automatic-menu'
Adding #1: IPL section '3.10.0-123.20.1.el7_0.kvmibm.15.s390x' (default)
Adding #2: IPL section 'linux'
Preparing boot device: dm-0.
Done.

Note: It is important to execute these two commands. Otherwise, the system will not come up after a reboot.
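To gain confidence that the rebuilt initramfs picked up the zfcp configuration, the image contents can be listed with the lsinitrd tool from dracut. This is our own verification, under the assumption that the dracut zfcp module copies /etc/zfcp.conf into the image:

[root@itsokvm1 ~]# lsinitrd | grep zfcp   # the listing should include the zfcp module files and etc/zfcp.conf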
Example 3-21 shows the commands we executed to create physical volumes on the new partitions. Then the physical volumes were added to a volume group, the logical volume was expanded, and the root file system was resized.

Example 3-21 Creating physical volumes
[root@itsokvm1 ~]# pvcreate /dev/mapper/360050768018305e120000000000000ed1
  Physical volume "/dev/mapper/360050768018305e120000000000000ed1" successfully created
[root@itsokvm1 ~]# pvcreate /dev/mapper/360050768018305e120000000000000ee1
  Physical volume "/dev/mapper/360050768018305e120000000000000ee1" successfully created
[root@itsokvm1 ~]# pvs
  PV                                              VG   Fmt  Attr PSize  PFree
  /dev/mapper/360050768018305e120000000000000ea1 zkvm lvm2 a--  10.00g      0
  /dev/mapper/360050768018305e120000000000000eb1 zkvm lvm2 a--  10.00g      0
  /dev/mapper/360050768018305e120000000000000ec3 zkvm lvm2 a--   5.50g      0
  /dev/mapper/360050768018305e120000000000000ed1      lvm2 a--  10.00g 10.00g
  /dev/mapper/360050768018305e120000000000000ee1      lvm2 a--  10.00g 10.00g

Example 3-22 shows how to add the physical volumes to the volume group. It shows the volume group information before and after the volume group was extended, in addition to the physical volume information after the new physical volumes were added to the volume group.

Example 3-22 Adding physical volumes to the volume group
[root@itsokvm1 ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  zkvm   3   1   0 wz--n- 25.49g     0
[root@itsokvm1 ~]# vgextend zkvm /dev/mapper/360050768018305e120000000000000ed1 /dev/mapper/360050768018305e120000000000000ee1
  Volume group "zkvm" successfully extended
[root@itsokvm1 ~]# pvs
  PV                                              VG   Fmt  Attr PSize  PFree
  /dev/mapper/360050768018305e120000000000000ea1 zkvm lvm2 a--  10.00g      0
  /dev/mapper/360050768018305e120000000000000eb1 zkvm lvm2 a--  10.00g      0
  /dev/mapper/360050768018305e120000000000000ec3 zkvm lvm2 a--   5.50g      0
  /dev/mapper/360050768018305e120000000000000ed1 zkvm lvm2 a--  10.00g 10.00g
  /dev/mapper/360050768018305e120000000000000ee1 zkvm lvm2 a--  10.00g 10.00g
[root@itsokvm1 ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  zkvm   5   1   0 wz--n- 45.48g 19.99g
Example 3-23 shows the lvextend command, together with the logical volume information before and after running the lvextend command.

Example 3-23 Extending a logical volume and resizing the file system
[root@itsokvm1 ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  root zkvm -wi-ao---- 25.49g
[root@itsokvm1 ~]# lvextend /dev/mapper/zkvm-root -L +19G
  Extending logical volume root to 44.49 GiB
  Logical volume root successfully resized
[root@itsokvm1 ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  root zkvm -wi-ao---- 44.49g

Example 3-24 shows the resizing of the root file system. It also shows the output of the df command before and after resizing.

Example 3-24 Resizing the root file system
[root@itsokvm1 ~]# df -h
Filesystem                                      Size  Used Avail Use% Mounted on
/dev/mapper/zkvm-root                            25G  3.9G   20G  17% /
devtmpfs                                         32G     0   32G   0% /dev
tmpfs                                            32G     0   32G   0% /dev/shm
tmpfs                                            32G  8.5M   32G   1% /run
tmpfs                                            32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/360050768018305e120000000000000ec1  488M   80M  373M  18% /boot
[root@itsokvm1 ~]# resize2fs /dev/mapper/zkvm-root
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/mapper/zkvm-root is mounted on /; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 6
The filesystem on /dev/mapper/zkvm-root is now 11662336 blocks long.
[root@itsokvm1 ~]# df -h
Filesystem                                      Size  Used Avail Use% Mounted on
/dev/mapper/zkvm1-root                           44G  3.9G   38G  10% /
devtmpfs                                         32G     0   32G   0% /dev
tmpfs                                            32G     0   32G   0% /dev/shm
tmpfs                                            32G  8.5M   32G   1% /run
tmpfs                                            32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/360050768018305e120000000000000ec1  488M   80M  373M  18% /boot

The additional space provided by the new LUNs is now available to KVM for IBM z for use.
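As an aside, the last two steps can be combined: lvextend with the -r (--resizefs) option grows the logical volume and the file system in one command. We did not use it in our procedure; this is a sketch of the alternative:

[root@itsokvm1 ~]# lvextend -r -L +19G /dev/mapper/zkvm-root   # extend the LV and resize the ext4 file system in one step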
3.3 Deploying virtual machines
This section describes the steps we performed in KVM for IBM z for defining a domain and installing a Linux on z Systems virtual machine into that domain. The following tasks are described in this section:
3.3.1, “Preparing the environment” on page 58
3.3.2, “Installing Linux on z Systems” on page 61
3.3.3, “Modifying domain definitions” on page 61
3.3.4, “Linux on z Systems configuration” on page 63

3.3.1 Preparing the environment
Example 3-25 shows that a 5 GB qcow2 file is being created, which is provided as a virtual disk to the virtual machine.

Example 3-25 qcow2 disk
[root@itsokvm1 ~]# cd /var/lib/libvirt/images/
[root@itsokvm1 images]# qemu-img create -f qcow2 linux80.img 5G
Formatting 'linux80.img', fmt=qcow2 size=5368709120 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16

The initial ramdisk and kernel files are needed for the Linux on z Systems installation. We obtained them from the installation DVD on the FTP server and renamed them to suit this scenario, as depicted in Example 3-26.

Example 3-26 Obtaining files
[root@itsokvm1 images]# curl ftp://ftp:ftp@192.168.60.15/SLES12SP1/DVD1/boot/s390x/cd.ikr > s12-kernel.boot
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 45.3M  100 45.3M    0     0  22.8M      0  0:00:01  0:00:01 --:--:-- 22.8M
[root@itsokvm1 images]# curl ftp://ftp:ftp@192.168.60.15/SLES12SP1/DVD1/boot/s390x/initrd > s12-initrd.boot
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 29.3M  100 29.3M    0     0  16.4M      0  0:00:01  0:00:01 --:--:-- 16.4M

Lastly, we created domain definition files in .xml format. We found it convenient to create two files for a domain: one for installation purposes and one for regular use after installation.
Example 3-27 shows the linux80.xml.install file, which contains definitions for booting the installation files.

Example 3-27 linux80.xml.install
<domain type='kvm'>
  <name>linux80</name>
  <description>Guest-System Suse Sles12</description>
  <memory>524288</memory>
  <vcpu>1</vcpu>
  <cputune>
  </cputune>
  <os>
    <type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
    <!-- Boot kernel - remove 3 lines after successful installation -->
    <kernel>/var/lib/libvirt/images/s12-kernel.boot</kernel>
    <initrd>/var/lib/libvirt/images/s12-initrd.boot</initrd>
    <cmdline>HostIP=192.168.60.80/24 Hostname=linux80.itso.ibm.com Gateway=192.168.60.1 Layer2=1 Install=ftp://ftp:ftp@192.168.60.15/SLES12SP1/DVD1/ UseVNC=1 VNCPassword=12345678 InstNetDev=virtio Manual=0</cmdline>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>preserve</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-s390x</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/linux80.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0002'/>
    </disk>
    <interface type="bridge">
      <source bridge="vsw_mgmt"/>
      <virtualport type="openvswitch"/>
      <model type="virtio"/>
    </interface>
    <console type='pty'>
      <target type='sclp' port='0'/>
    </console>
  </devices>
</domain>
Example 3-28 shows the definition of the linux80.xml file. The kernel, initrd, and cmdline statements were removed. One more network interface was defined for the vsw_data OVS bridge.

Example 3-28 linux80.xml
<domain type='kvm'>
  <name>linux80</name>
  <description>Guest-System Suse Sles12</description>
  <memory>524288</memory>
  <vcpu>1</vcpu>
  <cputune>
  </cputune>
  <os>
    <type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>preserve</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-s390x</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/linux80.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0002'/>
    </disk>
    <interface type="bridge">
      <source bridge="vsw_mgmt"/>
      <virtualport type="openvswitch"/>
      <model type="virtio"/>
    </interface>
    <interface type="bridge">
      <source bridge="vsw_data"/>
      <virtualport type="openvswitch"/>
      <model type="virtio"/>
    </interface>
    <console type='pty'>
      <target type='sclp' port='0'/>
    </console>
  </devices>
</domain>
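Before defining either domain with virsh (done in the next section), a hand-written definition file can be checked against the libvirt schemas with the virt-xml-validate tool that ships with libvirt. This step is not part of our documented procedure, just a precaution we suggest:

[root@itsokvm1 images]# virt-xml-validate linux80.xml domain   # reports schema violations before virsh define does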
3.3.2 Installing Linux on z Systems
Example 3-29 shows how we defined and started the linux80 domain. Because its .xml file points to the installation initial RAM disk and kernel, it starts the installation of Linux on z Systems.

Example 3-29 Defining and starting the Linux on z Systems installation
[root@itsokvm1 images]# virsh define linux80.xml.install
Domain linux80 defined from linux80.xml.install

[root@itsokvm1 ~]# virsh start linux80 --console
Domain linux80 started
Connected to domain linux80
...
starting VNC server...
A log file will be written to: /var/log/YaST2/vncserver.log ...
***
***           You can connect to <host>, display :1 now with vncviewer
***           Or use a Java capable browser on http://<host>:5801/
***
(When YaST2 is finished, close your VNC viewer and return to this window.)
Active interfaces:
eth0      Link encap:Ethernet HWaddr 52:54:00:A4:E3:B5
          inet addr:192.168.60.80 Bcast:192.168.60.255 Mask:255.255.255.0
--
lo        Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
*** Starting YaST2 ***

Linux on z Systems can be installed using the virtual network computing (VNC) interface. Installing Linux on z Systems in a KVM for IBM z environment is no different from any other Linux on z Systems installation. For more details, including installation panel captures, see The Virtualization Cookbook for IBM z Systems Volume 3: SUSE Linux Enterprise Server 12, SG24-8890:
http://www.redbooks.ibm.com/abstracts/sg248890.html?Open

3.3.3 Modifying domain definitions
After Linux on z Systems is installed, it is automatically rebooted. Because the domain definition still specifies the installation initial RAM disk and kernel as the boot device, the installation process would start again from the beginning. To get out of the console and execute virsh commands, press Ctrl + ] (right bracket) to return to the shell.
Example 3-30 shows the commands we executed to redefine the linux80 domain:
The destroy command shuts down the virtual machine.
The undefine command removes the domain definition from KVM for IBM z. Linux on z Systems was installed in the qcow2 file and can be used in the new domain definition.
The define command defines the linux80 domain again, this time from an .xml file that defines the virtual hard disk as the boot device.
The edit command allows you to make changes to an existing virtual machine configuration file. A text editor opens with the contents of the given .xml file.

Example 3-30 Redefine domain
[root@itsokvm1 images]# virsh destroy linux80
Domain linux80 destroyed

[root@itsokvm1 images]# virsh undefine linux80
Domain linux80 has been undefined

[root@itsokvm1 images]# virsh define linux80.xml
Domain linux80 defined from linux80.xml

After the domain is redefined, restart it. Now the previously installed Linux on z Systems server is brought up from the virtual disk, as shown in Example 3-31.

Example 3-31 Start virtual machine
[root@itsokvm1 images]# virsh start linux80 --console
Domain linux80 started
Connected to domain linux80
...
+----------------------------------------------------------------------------+
|*SLES12-SP1                                                                  |
| Advanced options for SLES12-SP1                                             |
|                                                                              |
+----------------------------------------------------------------------------+
...
Welcome to SUSE Linux Enterprise Server 12 SP1 Beta3 (s390x) - Kernel 3.12.47-2-default (ttysclp0).

linux80 login:
3.3.4 Linux on z Systems configuration
As described in 3.1.1, “Logical view” on page 28, our virtual servers need access to two LANs. This is specific to each environment. During the Linux on z Systems installation, one NIC was configured, which connects the virtual server to the vsw_mgmt Open vSwitch. In 3.3.3, “Modifying domain definitions” on page 61, we added another NIC, which connects the virtual server to the vsw_data network, but this NIC is not configured in Linux on z Systems. We used the YaST Control Center to configure the second NIC. When we exited YaST, we saw that both network interfaces were configured, as shown in Example 3-32.

Example 3-32 Two NICs configured
linux80:~ # ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:00:59:B3:CE
          inet addr:192.168.60.80  Bcast:192.168.60.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe59:b3ce/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:59934 errors:0 dropped:20 overruns:0 frame:0
          TX packets:50302 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6322856 (6.0 Mb)  TX bytes:115456986 (110.1 Mb)

eth1      Link encap:Ethernet  HWaddr 52:54:00:04:A9:80
          inet addr:172.16.60.80  Bcast:172.16.60.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe04:a980/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:759 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:188951 (184.5 Kb)  TX bytes:746 (746.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:13169 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13169 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:111754696 (106.5 Mb)  TX bytes:111754696 (106.5 Mb)
Chapter 4. Managing and monitoring the environment

In this chapter, we discuss various tools for managing KVM-based guest operating systems. Specifically, we discuss the virsh and Nagios tools from the management and monitoring perspectives, respectively. We also describe how virsh can be used for managing the guest from a command-line interface (CLI).

The following topics are covered in this chapter:
KVM on IBM z System management interfaces
Using virsh
Monitoring KVM for IBM z Systems
4.1 KVM on IBM z System management interfaces
Although KVM provides a simple mechanism for sharing resources by isolating the application’s software environment, most guest virtual machines incur some kind of virtualization overhead. This overhead varies depending on the type of application, the type of virtualization, and the virtual machine (VM) monitor used. For I/O applications in particular, the CPU overhead incurred by KVM on behalf of the VMs is considerable and affects the performance characteristics of applications. This makes it necessary to have tools for managing and monitoring the resources used by KVM for IBM z Systems.

Figure 4-1 provides a high-level view of the various interfaces and tools that are available for virtual server management.

Figure 4-1 Management and monitoring interfaces

4.1.1 Introduction to the libvirt management stack
Libvirt, the virtualization application programming interface (API), provides a common layer of abstraction and control for virtual machines that are deployed within many different hypervisors, including KVM. The main components of libvirt are the control daemon, a stable C language API, a corresponding set of Python language bindings, and a simple shell environment. Currently, all KVM management tools (including virsh and OpenStack) use libvirt as the underlying VM control mechanism.

Libvirt stores information, such as the disk image and networking configuration, in an .xml file. This file is independent of the hypervisor in use.
Figure 4-2 provides a pictorial view of the virsh interface with libvirt for virtual server management.

Figure 4-2 Libvirt interface with virsh

Virsh
Virsh provides an easy-to-use console shell interface to the libvirt library for controlling guest instances. Each of the commands available in virsh can be used either from the virsh environment or called from a standard Linux console:
To start a virsh environment, run the virsh shell program with no options. This opens a new console-like environment in which you can run any of the built-in virsh commands.
To use the virsh commands from a Linux terminal, run virsh followed by the command name and command options.

Custom scripting
Libvirt provides stable C language APIs for VM management. Apart from this, libvirt supports C and C++ directly and also provides a comprehensive set of Python language bindings. By combining the libvirt API and Python, the libvirt module can be extended to cover all functions that are needed for virtual server management on KVM for IBM z Systems.

4.2 Using virsh
Virsh is the main CLI for libvirt for managing virtual machines. Virsh provides many commands; we begin by describing some of the basic ones. A complete list of the virsh commands supported in KVM for IBM z, with detailed descriptions, is provided in KVM Virtual Server Management, SC34-2752, in the IBM Knowledge Center:
https://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_va.html

Tip: For more information about how to create scripts to manage KVM virtual machines, see:
http://www.ibm.com/developerworks/library/os-python-kvm-scripting1/
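For illustration, the two invocation styles described above look as follows, using the list command (covered in 4.2.1):

[root@itsokvm1 ~]# virsh list --all   # one-shot invocation from the Linux shell
[root@itsokvm1 ~]# virsh              # interactive virsh environment
virsh # list --all
virsh # quit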
4.2.1 Basic commands
This section describes basic virsh commands:

define    Creates a virtual server with the unique name specified in the domain configuration .xml file.
start     Starts a defined virtual server. Using the --console option grants initial access to the virtual server console and displays all messages that are issued to the console.
shutdown  Terminates a running virtual server by sending a shutdown signal to the VM. This allows a proper shutdown of the operating system running on the VM.
destroy   Immediately terminates a virtual server without any interaction with the operating system running on the VM.
undefine  Deletes the definition of a virtual server from libvirt.
list      Without an option, lists the running virtual servers. With the --all option, lists all defined virtual servers.
edit      Opens the libvirt internal definition of a VM and allows it to be changed. These changes are not applied dynamically; they become effective after a restart of the VM.

4.2.2 Add I/O resources dynamically
The virsh command attach-device enables you to add an I/O device dynamically to a VM. It requires an .xml definition file for the device to be attached as input. The following examples show how to add an I/O device to a VM.

Example 4-1 shows that, before running the attach-device command, there is only one disk available in linux82.

Example 4-1 Before running the attach-device command
linux82:~ # ls -l /dev/vd*
brw-rw---- 1 root disk 254, 0 Sep 30 12:04 /dev/vda
brw-rw---- 1 root disk 254, 1 Sep 30 12:04 /dev/vda1
brw-rw---- 1 root disk 254, 2 Sep 30 12:04 /dev/vda2
brw-rw---- 1 root disk 254, 3 Sep 30 12:04 /dev/vda3

Example 4-2 shows an .xml definition of another LUN available to KVM for IBM z. It will be visible in the VM as device vdb.

Example 4-2 The .xml definition
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native' iothread='0'/>
  <source dev='/dev/mapper/360050768018305e120000000000000f2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
Example 4-3 shows the command that reads the .xml file and attaches the device to a running VM. This command attaches the disk to the VM temporarily. To make the change permanent, use the --config parameter.

Example 4-3 The attach-device command
[root@itsokvm1 images]# virsh attach-device linux82 add_lun_8.xml
Device attached successfully

Example 4-4 shows the new device vdb available to VM linux82.

Example 4-4 After the attach-device
linux82:~ # ls -l /dev/vd*
brw-rw---- 1 root disk 254, 0 Sep 30 12:04 /dev/vda
brw-rw---- 1 root disk 254, 1 Sep 30 12:04 /dev/vda1
brw-rw---- 1 root disk 254, 2 Sep 30 12:04 /dev/vda2
brw-rw---- 1 root disk 254, 3 Sep 30 12:04 /dev/vda3
brw-rw---- 1 root disk 254, 16 Sep 30 13:39 /dev/vdb
brw-rw---- 1 root disk 254, 17 Sep 30 13:39 /dev/vdb1

4.2.3 VM live migration
KVM Virtual Server Management, SC34-2752, in the IBM Knowledge Center describes the details and considerations for migrating a virtual machine to another instance of KVM for IBM z:
https://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_va.html

The most important requirement is to have equal I/O resources available in both environments.

The default firewall settings on KVM for IBM z do not allow live migration. Example 4-5 shows the commands to execute on both of the KVM for IBM z images to allow live migration between them.

Example 4-5 Setting up the firewall to allow live migration
[root@itsokvm2 ~]# firewall-cmd --zone=public --add-port=49152-49215/tcp --permanent
success
[root@itsokvm2 ~]# firewall-cmd --reload

Although we used IP addresses and not host names in the migrate command, we still needed to create records for the target KVM for IBM z host in the /etc/hosts file. Otherwise, the migrate command reports an error.
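For illustration, the record on itsokvm1 for the migration target might look like the following sketch; the fully qualified host name is an assumption based on our itso.ibm.com domain:

[root@itsokvm1 ~]# cat >> /etc/hosts <<'EOF'
192.168.60.71   itsokvm2.itso.ibm.com itsokvm2
EOF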
Example 4-6 lists the running VMs on both KVM for IBM z images before the migration.

Example 4-6 List of running VMs before live migration
[root@itsokvm1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 2     linux80                        running
 19    instance-00000003              running
 24    linux82                        running

[root@itsokvm2 ~]# virsh list
 Id    Name                           State
----------------------------------------------------

Example 4-7 shows the actual migrate command that we executed.

Example 4-7 Live migration command
[root@itsokvm1 ~]# virsh migrate --live linux82 qemu+ssh://192.168.60.71/system
root@192.168.60.71's password:

Example 4-8 lists the running VMs on both KVM for IBM z Systems images after the migration.

Example 4-8 List of running VMs after the live migration
[root@itsokvm1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 2     linux80                        running
 19    instance-00000003              running

[root@itsokvm2 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 3     linux82                        running
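Note that with the command form in Example 4-7, the migrated domain is transient on the target and its definition remains on the source. If the domain should instead be permanently defined on the target and removed from the source, virsh migrate accepts the --persistent and --undefinesource options; a sketch:

[root@itsokvm1 ~]# virsh migrate --live --persistent --undefinesource linux82 qemu+ssh://192.168.60.71/system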
4.3 Monitoring KVM for IBM z Systems
For any virtualized environment, monitoring the hypervisor resources is crucial for predicting bottlenecks and avoiding downtime. The rest of this section focuses on monitoring and describes the steps to configure the open source monitoring tool called Nagios.

4.3.1 Configuring the Nagios monitoring tool
Nagios is a monitoring tool that enables organizations to identify and resolve IT infrastructure problems before they affect the business. If there is a failure, Nagios alerts the technical staff about the problem, allowing them to begin the appropriate course of action. In KVM for IBM z Systems, Nagios monitoring is enabled using the Nagios remote plug-in executor (NRPE), which is the preferred method for remote monitoring of hosts.

The following Nagios plug-ins are enabled in KVM for IBM z Systems:
Load average
Disk usage
Process count and resource usage

The next step is to prepare the configuration file /etc/nagios/nrpe.cfg with environment-related attributes. We backed up the configuration file and then updated the attributes, following the process shown in Figure 4-3.

Figure 4-3 Nagios server monitoring a KVM for IBM z Systems host through NRPE

Configuring the Nagios server
The NRPE daemon is designed to enable you to execute Nagios plug-ins on remote Linux or UNIX machines. The main reason for doing this is to allow Nagios to monitor local resources (such as CPU load and memory usage) on remote machines. Because these resources are not usually exposed to external machines, an agent such as NRPE must be installed on the remote Linux or UNIX machines, on which the /etc/nagios/nrpe.cfg file needs to be configured. See Example 4-9.

Example 4-9 Attributes to change in the /etc/nagios/nrpe.cfg file
server_address=192.168.60.70
allowed_hosts=127.0.0.1,192.168.60.15
command[check_users]=/usr/lib64/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib64/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /dev/mapper/zkvm1-root
command[check_zombie_procs]=/usr/lib64/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib64/nagios/plugins/check_procs -w 150 -c 200

Important: In this section, we cover only setting up the Nagios NRPE plug-in that is packaged with KVM for IBM z Systems (see Monitored Host in Figure 4-3). The NRPE daemon requires that the Nagios plug-ins be installed on the remote Linux or UNIX host. Without these, the daemon cannot monitor the nodes. For implementing the Nagios server, see the Nagios Quickstart Installation Guides website:
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/quickstart.html
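Each command[...] definition in nrpe.cfg can be tested locally on the monitored host before wiring it into Nagios. This is a check of our own, using the plug-in path from Example 4-9:

[root@itsokvm1 ~]# /usr/lib64/nagios/plugins/check_load -w 15,10,5 -c 30,25,20   # prints OK/WARNING/CRITICAL and the load averages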
Now we can start the NRPE daemon, as shown in Example 4-10.

Example 4-10 Start the NRPE daemon
[root@itsokvm1 nagios]# systemctl start nrpe.service

After NRPE is started in the hypervisor, as shown in Example 4-11, verify that the port (5666) used by NRPE is in a listening state.

Example 4-11 Starting NRPE in KVM for IBM z Systems
[root@itsokvm1 /]# systemctl status nrpe.service
nrpe.service - NRPE
   Loaded: loaded (/usr/lib/systemd/system/nrpe.service; disabled)
   Active: active (running) since Tue 2015-09-29 11:43:15 EDT; 1 day 3h ago
  Process: 22566 ExecStart=/usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d $NRPE_SSL_OPT (code=exited, status=0/SUCCESS)
 Main PID: 22567 (nrpe)
   CGroup: /system.slice/nrpe.service
           └─22567 /usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d

Sep 29 11:43:15 itsokvm1.itso.ibm.com nrpe[22567]: Starting up daemon
Sep 29 11:43:15 itsokvm1.itso.ibm.com systemd[1]: Started NRPE.
Sep 29 11:43:15 itsokvm1.itso.ibm.com nrpe[22567]: Server listening on 192.168.60.70 port 5666.
Sep 29 11:43:15 itsokvm1.itso.ibm.com nrpe[22567]: Listening for connections on port 0
Sep 29 11:43:15 itsokvm1.itso.ibm.com nrpe[22567]: Allowing connections from: 127.0.0.1,192.168.60.15
[root@itsokvm1 /]# netstat -pant | grep nrpe
tcp   0   0 192.168.60.70:5666   0.0.0.0:*   LISTEN   22567/nrpe

Next, we need to check whether the NRPE daemon is functioning properly by executing the check_nrpe plug-in, which is packaged with the Nagios tool for testing purposes. From the Nagios server, execute the command shown in Example 4-12 with the IP address of the server that needs to be monitored.

Example 4-12 Verification of NRPE communication with other remote hosts
[root@monitoring ~]# /usr/local/nagios/libexec/check_nrpe -H 192.168.60.70
NRPE v2.15
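Two follow-ups we found worth mentioning, although they are not part of the documented steps: the status output in Example 4-11 shows nrpe.service as disabled, so enabling it makes NRPE survive a restart of the hypervisor, and check_nrpe can query an individual command rather than only the daemon version:

[root@itsokvm1 ~]# systemctl enable nrpe.service   # start NRPE automatically at boot
[root@monitoring ~]# /usr/local/nagios/libexec/check_nrpe -H 192.168.60.70 -c check_load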
Configuring the remote host (monitored)
Next, create a few object definitions so that you can monitor the remote Linux or UNIX machine. We created a hosts.cfg file, shown in Example 4-13, in which our zKVM HyperVisor template definition inherits default values from the generic-host template. We also defined new host entries for the remote itsozkvm1 and itsozkvm2 hosts that reference the newly created zKVM HyperVisor host template.

Example 4-13 The hosts.cfg file with object definitions
[root@monitoring etc]# pwd
/usr/local/nagios/etc
[root@monitoring etc]# cat hosts.cfg
## Default Linux Host Template ##
define host{
    name                  zKVM HyperVisor    ; Name of this template
    use                   generic-host       ; Inherit default values
    check_period          24x7
    check_interval        5
    retry_interval        1
    max_check_attempts    10
    check_command         check-host-alive
    notification_period   24x7
    notification_interval 30
    notification_options  d,r
    contact_groups        admins
    register              0                  ; DONT REGISTER THIS - ITS A TEMPLATE
}

## Default
define host{
    use       zKVM HyperVisor   ; Inherit default values from a template
    host_name itsozkvm1         ; The name we're giving to this server
    alias     IBM KVM           ; A longer name for the server
    address   192.168.60.70     ; IP address of Remote Linux host
}

## Default
define host{
    use       zKVM HyperVisor   ; Inherit default values from a template
    host_name itsozkvm2         ; The name we're giving to this server
    alias     IBM KVM           ; A longer name for the server
    address   192.168.60.71     ; IP address of Remote Linux host
}
[root@monitoring etc]#
Next, define the built-in services for monitoring host system resources, as shown in Example 4-14.

Example 4-14 Define the services that monitor system resources
[root@monitoring etc]# cat services.cfg
define service{
    use                 generic-service
    host_name           itsozkvm1
    service_description CPU Load
    check_command       check_nrpe!check_load
}
define service{
    use                 generic-service
    host_name           itsozkvm1
    service_description Total Processes
    check_command       check_nrpe!check_total_procs
}
define service{
    use                 generic-service
    host_name           itsozkvm1
    service_description Current Users
    check_command       check_nrpe!check_users
}
define service{
    use                 generic-service
    host_name           itsozkvm1
    service_description SSH Monitoring
    check_command       check_nrpe!check_disk
}
define service{
    use                 generic-service
    host_name           itsozkvm1
    service_description FTP Monitoring
    check_command       check_nrpe!check_procs
}
[root@monitoring etc]#
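Before restarting Nagios, the configuration can be checked for syntax and reference errors. The paths below assume a Nagios source install under /usr/local/nagios, matching the paths used on our monitoring host; a sketch:

[root@monitoring etc]# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg   # verify object definitions
[root@monitoring etc]# systemctl restart nagios   # or however the Nagios service is managed on the monitoring host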
After restarting the Nagios services on the monitoring host, we can log in to the Nagios web interface and see the new host and service definitions for the remote KVM hosts included in Nagios monitoring. In our case, these are itsokvm1 and itsokvm2, as shown in Figure 4-4.

Figure 4-4 Map of remote hosts managed by Nagios monitoring

Within a minute or two, Nagios shows the current status information for the KVM for IBM z Systems host resources, in our case the itsokvm1 and itsokvm2 hosts, as shown in Figure 4-5.

Figure 4-5 Remote host status
Chapter 5. Building a cloud environment

This chapter provides an overview of a reference implementation of IBM Cloud Manager with OpenStack for KVM on IBM z Systems. We address the following topics:
Overview of IBM Cloud Manager with OpenStack V4.3
Installing, deploying, and configuring KVM on a cloud based on IBM z Systems
    78 Getting Startedwith KVM for IBM z Systems 5.1 Overview of IBM Cloud Manager with OpenStack V4.3 In general, based on where organizations deploy cloud services and who can access these services, there are two main types of cloud-computing models: public cloud and private cloud. In public clouds, an organization offers resources as a service, usually over an internet connection, typically for a pay-per-usage fee. In private clouds, the organization deploys resources inside a firewall and self-manages those resources. Here, the resources and services are not shared outside of the organization. This chapter describes how to build a private cloud. IBM provides a complete ecosystem and tools for building a highly effective private cloud, for which the following factors need to be considered: Security Resilience Performance Scalability of thousands of nodes Openness and heterogeneity Interoperability IBM z Systems is the ideal platform for an effective private cloud based on the following requirements: Openness and heterogeneity The inherent strengths of IBM Cloud Manager with OpenStack and KVM for IBM z Systems Reliability, availability, and serviceability (RAS) features The fundamental building blocks of IBM z Systems 5.1.1 IBM Cloud Manager with OpenStack version 4.3 IBM Cloud Manager with OpenStack is an easy-to-deploy, simple-to-use cloud management software offering based on OpenStack with IBM enhancements. IBM Cloud Manager features an IBM Self-Service portal for workload provisioning, virtual image management, and monitoring. It is an innovative, cost-effective approach that also includes automation, metering, and security for your virtualized environment. IBM Cloud Manager with OpenStack supports KVM for IBM z Systems compute nodes. KVM for IBM z Systems compute nodes must run in a z Systems logical partition. The KVM for IBM z Systems compute node must satisfy the following requirements: Operating system: KVM for IBM z Systems version 1.1 Hardware: zEC12/zBC12 or later Important: Support for KVM for IBM z Systems has been included with Fix Pack 3 of IBM Cloud Manager V4.3. For further details about KVM for IBM z Systems prerequisites and support, see these IBM Knowledge Center pages: KVM for IBM z Systems prerequisites http://ibm.co/1Lpru5Z KVM for IBM z Systems http://www.ibm.com/support/knowledgecenter/SSNW54_1.1.0
For IBM z Systems, IBM Cloud Manager transforms an installation of KVM on IBM z Systems and the required storage and network infrastructure into an entry-level private cloud solution that provides the following functions:
- Self-service portal
- Automated provisioning of virtual machines (VMs)
- Automated deprovisioning of VMs
- Cloning and snapshots of workloads
- Starting and stopping of VMs
- Resizing existing VMs
- Approval lifecycle
- Email notifications
- Billing and accounting

5.1.2 Environmental setup

As a starting point for our IBM Cloud Manager deployment, we reuse the same KVM host that was deployed in Chapter 3, “Installing and configuring the environment” on page 27. Also, our KVM for IBM z Systems host has met the required networking and storage prerequisites.

The next task is to set up a network topology. The IBM Cloud Manager with OpenStack solution provides a few predefined example topologies. The topologies listed in Table 5-1 are supported with IBM Cloud Manager with OpenStack V4.3.

Note: We suggest using the information from the following web page to review the required common tasks for getting started with IBM Cloud Manager with OpenStack:
Worksheet: Common production-level topologies
http://ibm.co/1MYaPYn

Table 5-1   Supported topologies

Topology                   Description
Minimal                    For product evaluation purposes. This topology is the simplest topology and does not require any customization. Some basic customization is supported for the KVM quick emulator (QEMU) compute hypervisor type.
Controller +n compute      For smaller test or production environments. This topology provides a single controller node, plus any number of compute nodes. You can configure this topology for your specific needs; for example, you can configure networking, the resource scheduler, and other advanced customizations.
HA controller +n compute   For larger test and production environments that require high availability (HA) cloud controllers. This topology provides multiple HA controller nodes, plus any number of compute nodes. You can configure this topology for your specific needs.
Distributed database       For larger test or production environments. This topology is similar to the controller +n compute topology; however, the distributed database topology allows the IBM Cloud Manager with OpenStack database service to run on a separate node. It also supports advanced customization.
Multi-region               For larger test or production environments; can include multiple hypervisor environments. This topology is similar to the controller +n compute topology; however, you can separate hypervisors by region. Each region has its own controller, but shares the same OpenStack Keystone architecture and potentially the IBM Cloud Manager Dashboard.
Controller node with multiple compute nodes

In this publication, we are setting up a cloud topology using a single controller node with multiple compute nodes. Here, the controller node runs the basic OpenStack services, including the KVM/QEMU Nova compute service. Figure 5-1 provides a high-level view of the topology we used.

Figure 5-1   Topology - single controller with multiple compute nodes
(The figure shows the deployment server running the Chef server and the solution delivery repository; a single controller node running the Chef client, IBM Cloud Manager, the database, message broker, Keystone, Nova, Cinder, Glance, Neutron, and the optional Self-Service portal; and multiple compute nodes, each running the Chef client, nova-compute, Ceilometer, and the Neutron service.)

When deploying the topology in Figure 5-1, we suggest using a two-node configuration, in which one node is the deployment server, and the other is the IBM Cloud Manager with OpenStack single controller node. If the KVM/QEMU, KVM for z Systems, PowerKVM, Hyper-V, or IBM z/VM compute hypervisor is used, then one or more systems are also required to provide the IBM Cloud Manager with OpenStack compute nodes for the topology.

The IBM Cloud Manager controller nodes have significant CPU and memory requirements, as they contain, at a high level, the Chef client, the IBM Cloud Manager, and the Self-Service portal.
Table 5-2 provides system information about the controller and compute node in our environment.

Table 5-2   Controller and compute node environment information

                   Controller node            Compute node
Operating system   RHEL 7.1                   KVM for IBM z Systems 1.1
Interface          enp3s0                     enccw0.0.2d00
Host name          controller.itso.ibm.com    itsokvm1.itso.ibm.com
IP address         192.168.60.16              192.168.60.70

5.2 Installing, deploying, and configuring KVM on a cloud based on IBM z Systems

The process of deploying IBM Cloud Manager V4.3 is accomplished using a Chef server. Chef is an open source automation framework for deploying resources on systems. The Chef server code is included in the installation package for IBM Cloud Manager with OpenStack. The following sections provide high-level steps for the cloud deployment process.

5.2.1 Installing and updating IBM Cloud Manager with OpenStack V4.3

In this section, we install and update IBM Cloud Manager with OpenStack V4.3, which requires the following steps:
1. Complete the prerequisites and create a YUM repository.
2. Install IBM Cloud Manager with OpenStack on the deployment server.
3. Update the Chef server software from the Select Fixes web page on IBM Fix Central:
   http://ibm.co/1NaGqdL
4. Verify that the Chef server is installed and running.

5.2.2 Deploying the IBM Cloud Manager topology

To deploy the IBM Cloud Manager topology, complete these steps:
1. Set up the RHEL 7.1 YUM repository for reference by the Chef server.
2. Create and edit the environment file.
3. Create and edit the topology file.
4. Create, edit, and upload the data bags (if needed).
5. Deploy the topology.
6. Verify the deployment.
7. Configure IBM Cloud Manager with OpenStack V4.3.

For installation steps pertaining to IBM Cloud Manager with OpenStack, see Appendix B, “Installing IBM Cloud Manager with OpenStack” on page 97.
5.2.3 Creating a cloud environment

An environment is a way to map an organization's real-life workflow to what can be configured and managed when using the Chef server. In the following sections, we describe the steps required to create your own cloud environment.

5.2.4 Environment templates

IBM Cloud Manager with OpenStack V4.3 has several prepackaged environments. Using the knife command, we can list the environment templates that are available. See Example 5-1.

Example 5-1   List the default environment templates
[root@controller ICM43]# knife environment list
_default
example-ibm-os-allinone
example-ibm-os-ha-controller-n-compute
example-ibm-os-single-controller-n-compute
example-ibm-sce
[root@controller ICM43]#

New cloud environment creation

Create a directory in the deployment node for storing environment and other topology files. This directory is used by the Chef server for deployment purposes. In Example 5-2, we copied the template for the single controller +n compute topology that we are going to deploy.

Example 5-2   Create your own environment
[root@controller itso_env]# knife environment show example-ibm-os-single-controller-n-compute -d -Fjson > itso_cldenv.json

With the environment created (see Example 5-2), we can change the following attributes in the new itso_cldenv.json file:
- Environment name
- openstack.endpoints.host
- openstack.endpoints.bind-host
- openstack.endpoints.mq.host
- openstack.endpoints.db.host
- ibm-sce.self-service.bind_interface
- openstack.compute.virt_type
- openstack.network.openvswitch.tenant_network_type = "gre"
- openstack.network.openvswitch.bridge_mappings = ""
- openstack.network.openvswitch.network_vlan_ranges = ""
- openstack.network.openvswitch.bridge_mapping_interface = ""
- openstack.network.ml2.tenant_network_types = "gre"
- openstack.network.ml2.network_vlan_ranges = ""
- openstack.network.ml2.flat_networks = ""

Tip: For the latest information about the attributes and parameters specific to KVM for IBM z Systems, see Deploying an advanced configuration with KVM for IBM z Systems:
http://ibm.co/1MyQiMZ
Example 5-3 shows the results.

Example 5-3   Attributes that have been changed in the new environment JSON
{
  "name": "itso_zkvm",
  :
  "endpoints": {
    "host": "192.168.60.16",
    "identity-admin": {
      "port": "35357"
  :
    "bind-host": "192.168.60.16",
    "mq": {
      "host": "192.168.60.16",
      "port": "5671"
  :
  "openstack": {
    "endpoints": {
      "network-openvswitch": {
        "bind_interface": "ens192"
      },
      "compute-vnc-bind": {
        "bind_interface": "ens192"
      },
      "compute-vnc-proxy-bind": {
        "bind_interface": "ens192"
      },
      "compute-serial-console-bind": {
        "bind_interface": "ens192"
      }
  :
    "ml2": {
      "type_drivers": "local,flat,vlan,gre,vxlan",
      "tenant_network_types": "gre",
      "mechanism_drivers": "openvswitch",
      "flat_networks": "",
      "network_vlan_ranges": "",
      "tunnel_id_ranges": "1:1000",
      "vni_ranges": "1001:2000"
    },
    "openvswitch": {
      "tenant_network_type": "gre",
      "network_vlan_ranges": "",
      "enable_tunneling": "True",
      "tunnel_type": "gre",
      "tunnel_id_ranges": "1:1000",
      "veth_mtu": 1500,
      "tunnel_types": "gre,vxlan"
    },
Register the environment with the Chef server

After the relevant changes have been made to the environment JSON file, we can register the environment with the Chef server. See Example 5-4.

Example 5-4   Registering the environment with the Chef server
[root@controller itso_env]# knife environment from file itso_cldenv.json
Updated Environment itso_zkvm
[root@controller itso_env]# knife environment list
_default
example-ibm-os-allinone
example-ibm-os-ha-controller-n-compute
example-ibm-os-single-controller-n-compute
example-ibm-sce
itso_zkvm
[root@controller itso_env]#

5.2.5 Creating a controller topology

Now we can proceed with creating a controller topology. In doing so, we provide details about the following items in a .json topology file:
- The controller node host name and authentication details
- Which environment the specific controller node conforms to
- The role the controller node will act as
- Other optional components to deploy, such as the IBM Self-Service Portal

Example 5-5   Creating a controller topology
[root@controller itso_env]# cat cntrltop.json
{
  "name":"cntrltop",
  "description":"topology definition for ITSO demo",
  "environment":"itso_zkvm",
  "secret_file":"/opt/ibm/cmwo/chef-repo/data_bags/example_data_bag_secret",
  "run_sequentially":false,
  "nodes": [
    {
      "fqdn":"controller.itso.ibm.com",
      "password":"password",
      "quit_on_error":true,
      "run_order_number":1,
      "runlist": [
        "role[ibm-os-single-controller-node]",
        "role[ibm-sce-node]"
      ]
    }
  ]
}
Deploying the controller topology file

With the controller topology file created, we can deploy the topology using the Chef server. From the deployment server, the Chef server authenticates to the controller node and starts the deployment of the various components of IBM Cloud Manager with OpenStack.

Example 5-6   Deploying the controller node topology
[root@controller itso_env]# knife os manage deploy topology cntrltop.json
Deploying topology 'cntrltop' ...
The topology nodes are being deployed.
Deploying to nodes with run_order_number '1' in parallel.
Bootstrapping nodes...
Bootstrapping node ...
Doing old-style registration with the validation key at /root/.chef/ibm-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to controller.itso.ibm.com
controller.itso.ibm.com  Starting Chef Client on Node
controller.itso.ibm.com  Bootstrapping Node
controller.itso.ibm.com  Synchronizing Cookbooks
controller.itso.ibm.com  Compiling Cookbooks
Deploying bootstrapped nodes...
Writing FIPS setting to environment 'itso_zkvm'
Setting run list for node controller.itso.ibm.com...
controller.itso.ibm.com: run_list: role[ibm-os-single-controller-node] role[ibm-sce-node]
controller.itso.ibm.com  Converging Node
controller.itso.ibm.com  Synchronizing Cookbooks
controller.itso.ibm.com  Compiling Cookbooks
controller.itso.ibm.com  Running Recipe chef_handler::default
:
:
:
controller.itso.ibm.com  Running Recipe openstack-bare-metal::api
controller.itso.ibm.com  Running Recipe apache2::default
controller.itso.ibm.com  Running Recipe ibm-sce::installfp
controller.itso.ibm.com  Completed
All nodes with run_order_number '1' deployed.
Results for deploy of topology 'cntrltop'
Results for nodes with run_order_number '1'
Deploy of node at controller.itso.ibm.com was successful.
Deploy of topology 'cntrltop.json' completed in 9708 seconds.
[root@controller itso_env]#
Verifying the controller node

With the deployment of the controller node completed, we need to verify that all of the OpenStack services and components are properly deployed and working, as shown in Example 5-7.

Example 5-7   Verification of Nova services
[root@controller etc]# nova-manage service list
Binary            Host                      Zone      Status   State  Updated_At
nova-conductor    controller.itso.ibm.com   internal  enabled  XXX    2015-09-24 14:33:50.450389
nova-scheduler    controller.itso.ibm.com   internal  enabled  XXX    2015-09-24 14:34:11.488662
nova-consoleauth  controller.itso.ibm.com   internal  enabled  XXX    2015-09-24 14:33:54.926035
nova-cert         controller.itso.ibm.com   internal  enabled  XXX    2015-09-24 14:34:05.490160
[root@controller etc]#

We have tailored various attributes in the environment JSON file. One of these is to use gre and openvswitch for connectivity. Therefore, during deployment of the controller node, the Chef server automatically converts the Ethernet flat network to a bridge. The Chef server also enables, configures, and couples the Open vSwitch ports for connectivity. The result is shown in Example 5-8.

Example 5-8   Open vSwitch network configuration
[root@controller ~]# ifconfig
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.60.15  netmask 255.255.255.0  broadcast 192.168.60.255
        inet6 fe80::216:41ff:feed:3cbd  prefixlen 64  scopeid 0x20<link>
        ether 00:16:41:ed:3c:bd  txqueuelen 0  (Ethernet)
        RX packets 1574  bytes 162929 (159.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 874  bytes 307373 (300.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::216:41ff:feed:3cbd  prefixlen 64  scopeid 0x20<link>
        ether 00:16:41:ed:3c:bd  txqueuelen 1000  (Ethernet)
        RX packets 2077  bytes 201279 (196.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 898  bytes 310586 (303.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 16

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 106408  bytes 14737594 (14.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 106408  bytes 14737594 (14.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@controller ~]# ovs-vsctl show
142779c0-fa4f-484f-ab1f-920642e9cdba
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "enp3s0"
            Interface "enp3s0"
    ovs_version: "2.3.0"
[root@controller ~]#
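For reference, the bridge coupling that the Chef server performs here is roughly equivalent to the following manual Open vSwitch commands. This is a sketch only (Chef also moves the IP configuration from the interface to the bridge) and is not needed when deploying with IBM Cloud Manager:

# Create the external bridge and attach the flat-network interface to it
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex enp3s0

# Confirm the resulting bridge and port layout
ovs-vsctl show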
5.2.6 Creating a compute node topology

With the successful deployment of the controller node, let's deploy the KVM for IBM z Systems compute node. Similar to what we did in 5.2.5, “Creating a controller topology” on page 84, we need to create a topology for the compute node by providing the following details:
- The compute node host name and other authentication details
- The environment that the compute node conforms to
- The role of the compute node, for example, ibm-os-compute-node-kvmibm

We also need to create a node-specific network attribute file. This file is only required because the attributes of our compute node network are different from those defined in our itso_zkvm environment file, itso_cldenv.json. For example, our controller node network interface is ens192, but the compute node has a different network interface (enccw0.0.2d00). So, using the attribute file as Example 5-9 on page 88 shows, we can specify node-specific attributes.

Attention: Use care when providing the required attributes in the topology files. Some customization options might not be supported for all hypervisor types, and some cannot be configured after you deploy your cloud environment.
Example 5-9   System and network attributes file for the KVM for IBM z Systems compute node
[root@controller itso_env]# cat zkvmtop.json
{
  "name":"zkvmtop",
  "description":"topology definition for zkvm",
  "environment":"itso_zkvm",
  "secret_file":"/opt/ibm/cmwo/chef-repo/data_bags/example_data_bag_secret",
  "run_sequentially":false,
  "nodes": [
    {
      "fqdn":"itsokvm1.itso.ibm.com",
      "password":"zlinux",
      "quit_on_error":true,
      "run_order_number":1,
      "runlist": [
        "role[ibm-os-compute-node-kvmibm]"
      ],
      "attribute_file":"zkvm-attr.json"
    }
  ]
}
[root@controller itso_env]# cat zkvm-attr.json
{
  "openstack": {
    "endpoints": {
      "network-openvswitch": {
        "bind_interface": "enccw0.0.2d00"
      },
      "compute-vnc-bind": {
        "bind_interface": "enccw0.0.2d00"
      },
      "compute-vnc-proxy-bind": {
        "bind_interface": "enccw0.0.2d00"
      },
      "compute-serial-console-bind": {
        "bind_interface": "enccw0.0.2d00"
      }
    }
  }
}
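Because a malformed topology or attribute file causes the knife deployment to fail, it can help to validate the JSON syntax first. A quick sketch using Python's standard json.tool module (any JSON validator works as well):

# Validate and pretty-print the topology and attribute files before deploying
python -m json.tool zkvmtop.json
python -m json.tool zkvm-attr.json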
Compute node deployment

After customizing the topology and attribute files, proceed with the deployment of the compute node topology, as shown in Example 5-10.

Important: As a prerequisite, the installation repository for the compute node has to be enabled and recognized by YUM on that node.

Example 5-10   Deploying the compute node topology using knife
[root@controller itso_env]# knife os manage deploy topology zkvmtop.json
Deploying topology 'zkvmtop' ...
The topology nodes are being deployed.
Deploying to nodes with run_order_number '1' in parallel.
Bootstrapping nodes...
Bootstrapping node ...
Doing old-style registration with the validation key at /root/.chef/ibm-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to itsokvm1.itso.ibm.com
itsokvm1.itso.ibm.com  Starting Chef Client on Node
itsokvm1.itso.ibm.com  Bootstrapping Node
itsokvm1.itso.ibm.com  Synchronizing Cookbooks
itsokvm1.itso.ibm.com  Compiling Cookbooks
Deploying bootstrapped nodes...
Setting run list for node itsokvm1.itso.ibm.com...
itsokvm1.itso.ibm.com: run_list: role[ibm-os-compute-node-kvmibm]
itsokvm1.itso.ibm.com  Converging Node
itsokvm1.itso.ibm.com  Synchronizing Cookbooks
itsokvm1.itso.ibm.com  Compiling Cookbooks
itsokvm1.itso.ibm.com  Running Recipe chef_handler::default
itsokvm1.itso.ibm.com  Running Recipe ibm-openstack-common::cmwo-version
:
:
:
itsokvm1.itso.ibm.com  Running Recipe openstack-network::openvswitch
itsokvm1.itso.ibm.com  Running Recipe openstack-telemetry::agent-compute
itsokvm1.itso.ibm.com  Completed
All nodes with run_order_number '1' deployed.
Results for deploy of topology 'zkvmtop'
Results for nodes with run_order_number '1'
Deploy of node at itsokvm1.itso.ibm.com was successful.
Deploy of topology 'zkvmtop.json' completed in 139 seconds.
[root@controller itso_env]#
5.2.7 Cloud environment verification

In this section, we verify that the OpenStack services were successfully deployed.

Compute service

From the controller node, execute the Nova service command to confirm that the compute node is now deployed and managed by IBM Cloud Manager with OpenStack. See Example 5-11.

Important: The Nova compute node must have a status of enabled, as shown in Example 5-11. Otherwise, the controller will not communicate with the compute node.

Example 5-11   Nova service list
[root@controller itso_env]# source ~/openrc
[root@controller itso_env]# nova service-list
+----+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                    | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller.itso.ibm.com | internal | enabled | up    | 2015-09-29T02:13:11.644559 | -               |
| 4  | nova-scheduler   | controller.itso.ibm.com | internal | enabled | up    | 2015-09-29T02:13:13.412254 | -               |
| 5  | nova-consoleauth | controller.itso.ibm.com | internal | enabled | up    | 2015-09-29T02:13:20.666483 | -               |
| 6  | nova-cert        | controller.itso.ibm.com | internal | enabled | up    | 2015-09-29T02:13:17.528850 | -               |
| 21 | nova-compute     | itsokvm1.itso.ibm.com   | nova     | enabled | up    | 2015-09-29T02:13:13.291343 | -               |
+----+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+

Network services

Every network service or extension in the cloud environment registers itself with the Neutron server when the service or extension starts. For this reason, it is best to determine whether the compute node network agents are registered with the controller (see Example 5-12). To an extent, this also verifies that the environment is deployed correctly.

Example 5-12   Neutron agent list
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                    | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| b0cacb05-a5c7-4a50-9c18-e1646a8ba950 | DHCP agent         | controller.itso.ibm.com | :-)   | True           | neutron-dhcp-agent        |
| 67e56dfa-9f0d-432e-b8c7-b17ef42516d1 | L3 agent           | controller.itso.ibm.com | :-)   | True           | neutron-l3-agent          |
| e1be4b6c-9855-4a43-ab2a-a8d76db61cfa | Metadata agent     | controller.itso.ibm.com | :-)   | True           | neutron-metadata-agent    |
| f84df0fc-e243-4375-9482-217efc73d1e4 | Open vSwitch agent | controller.itso.ibm.com | :-)   | True           | neutron-openvswitch-agent |
| 4ceb8347-5b0f-46a3-98e8-10ef1c2428e4 | Loadbalancer agent | controller.itso.ibm.com | :-)   | True           | neutron-lbaas-agent       |
| 23e194b5-f825-4bca-9db2-407065a9b569 | Open vSwitch agent | itsokvm1.itso.ibm.com   | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
5.2.8 Accessing IBM Cloud Manager 4.3 with OpenStack

When the deployment is complete, the services of IBM Cloud Manager with OpenStack are ready to use. The IBM Cloud Manager Dashboard is available at this location:
https://controller.<domainname>

Where <domainname> is the fully qualified domain name of the controller node in your topology.

The IBM Self-Service user interface is accessible at this location:
https://controller.<domainname>:8080

Figure 5-2 shows the IBM Cloud Manager V4.3 dashboard.

Figure 5-2   IBM Cloud Manager with OpenStack Dashboard

IBM Cloud Manager virtual machine deployment

After deploying the components for creating a cloud environment, we need to set up the network for the VMs to use. With the IBM Cloud Manager Dashboard, you can create several types of networks for the VMs. The type of network you create depends on the type of network connectivity and the hypervisor you are using.

There are three processes to carry out, which are described in the following sections:
- “Create a network” on page 91
- “Upload the image to the cloud” on page 93
- “Launch an instance for deployment” on page 94

Create a network

To create a network and specify the network provider settings:
1. Log in to the dashboard, and select Admin > System Panel > Networks.
2. Click Create Network, and the window shown in Figure 5-3 on page 92 opens. (You cannot create a subnet by using this method. The subnet is created in the next step.)
Figure 5-3   Creating a gre network

3. After the network is created, create a subnet by clicking the newly created network and providing the requested network information for the new cloud environment. See Figure 5-4.

Figure 5-4   Adding a subnet and entering network information
Using virsh, we installed and created SLES 12 Linux QCOW2 images. For more information about creating QCOW2 images, see 3.3.1, “Preparing the environment” on page 58.

Cloud-init

Cloud-init is a multi-distribution package that handles early initialization of a cloud instance. The cloud-init software package is supported by IBM Cloud Manager with OpenStack and can be used to pass boot-time customization to virtual images (for example, server metadata, user data, personality files, and SSH keys). The config drive can be accessed by any guest operating system capable of mounting an ISO 9660 file system. Images that are built with a recent version of the cloud-init software package can automatically access and apply the supported customization values that are passed to the instance by the config drive.

Download the cloud-init .tar file from the Launchpad website:
https://launchpad.net/cloud-init/+download

You will also need the setuptools package installed in your target system for cloud-init to work. For more information about setuptools, see the following website:
https://pypi.python.org/pypi/setuptools
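As a sketch, installing cloud-init from the downloaded tarball inside the guest image looks as follows; the archive name is an example, so substitute the version that you actually downloaded:

# Unpack the cloud-init source archive (file name is an example)
tar -xzf cloud-init-0.7.6.tar.gz
cd cloud-init-0.7.6

# Install into the guest's Python environment (requires setuptools)
python setup.py install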
Upload the image to the cloud

With the network setup completed, you can upload the Linux image to the cloud. To do so:
1. Log in to the dashboard and select Admin > System Panel > Images.
2. Click Create Image. The window displays as shown in Figure 5-5.

Figure 5-5   Importing an image file to the cloud

3. With the relevant information provided, click OK to import the image to IBM Cloud Manager.

Launch an instance for deployment

After the image is imported, you can launch an instance for deployment:
1. Log in to the dashboard, and select Projects > Compute Panel > Instances.
2. Click Launch Instance, and the window shown in Figure 5-6 opens.

Figure 5-6   Launch instance

3. After the deployment, click Project > Compute > Instances and notice that the instance is listed. The instance will also be listed in the IBM Self-Service Portal, as shown in Figure 5-7.

Figure 5-7   IBM Self-Service portal
Appendix A. Installing KVM for IBM z Systems with ECKD devices

This appendix describes some of the differences between KVM for IBM z Systems installation on Small Computer System Interface (SCSI) devices, as shown in 3.2, “Setting up KVM for IBM z Systems” on page 31, and installation on ECKD devices.

Parameter file

It is possible to specify ECKD devices in the .prm file the same way that we did for SCSI devices in 3.2.1, “Preparing the .ins and .prm files” on page 32. Example A-1 shows a parameter file that specifies ECKD devices for the installer.

Example A-1   Parameter file
ro ramdisk_size=40000
rd.dasd=0.0.6500,0.0.6501
rd.znet=qeth,0.0.2d00,0.0.2d01,0.0.2d02,layer2=1,portno=0,portname=DUMMY
ip=192.168.60.71::192.168.60.1:255.255.255.0:itsokvm2:enccw0.0.2d00:none
inst.repo=ftp://ftp:ftp@192.168.60.15/KVM/DVD1

The rd.dasd statement defines two ECKD devices. All other statements are the same as for the SCSI installation.
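After the installation completes, the ECKD devices named on the rd.dasd statement should be online in the installed system. A quick check with the s390-tools commands follows (a sketch; the device numbers are taken from Example A-1):

# List the DASD devices; 0.0.6500 and 0.0.6501 should appear as active
lsdasd

# If a device is offline, bring it online by its channel device number
chccwdev -e 0.0.6501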
Figure A-1 shows the results during KVM for IBM z Systems installation. It is not possible to add ECKD devices in this panel; they must be defined in the parameter file.

Figure A-1   Devices for installation
Appendix B. Installing IBM Cloud Manager with OpenStack

This appendix describes the steps required to install IBM Cloud Manager with OpenStack.

Prerequisites

Before installing IBM Cloud Manager with OpenStack, be sure that the prerequisites are met.

Yum repository

The first and foremost prerequisite that needs to be met before deployment is to create repositories for the controller node operating system and its optional operating system packages. If you are not connected to a network for downloading the repositories from the Red Hat website (http://redhat.com), you have the option to create your own local repositories using the RHEL 7.1 repository and the optional RHEL 7.1 packages (see Example B-1).

Example B-1   Local yum repository
[root@controller ~]# yum repolist
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
repo id    repo name                                 status
local      RHEL 7.1 linux yum repository             4,371
optional   RHEL 7.1 linux Optional yum repository    3,194
repolist: 8,565
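A minimal sketch of building such a local repository from the RHEL 7.1 installation media follows; the mount point, target directory, and repository ID are assumptions, and the createrepo package must be installed:

# Copy the RPMs from the mounted RHEL 7.1 DVD and build the repository metadata
mkdir -p /repo/rhel71
cp -r /mnt/rhel71-dvd/Packages/* /repo/rhel71/
createrepo /repo/rhel71

# Point YUM at the local repository
cat > /etc/yum.repos.d/local.repo << 'EOF'
[local]
name=RHEL 7.1 linux yum repository
baseurl=file:///repo/rhel71
enabled=1
gpgcheck=0
EOF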
Host name

The host where you install the controller server must have a fully qualified domain name that includes the domain suffix. For example, the fully qualified domain name would be mydeploymentserver.ibm.com rather than mydeploymentserver. To verify that the controller and compute nodes have fully qualified domain names, use the command shown in Example B-2.

Example B-2   Verification of fully qualified domain name
[root@controller ~]# hostname
controller.itso.ibm.com
[root@controller ~]#

The host name of the controller system must be added to the DNS system. To verify that the host name is resolvable, issue the command shown in Example B-3.

Example B-3   Verification of resolvable host name
[root@controller ~]# hostname -f
controller.itso.ibm.com
[root@controller ~]#

Security-Enhanced Linux (SELinux)

For ease of deployment, we dynamically disabled SELinux enforcement on the controller node, as shown in Example B-4.

Example B-4   Disabling SELinux
[root@controller ICM43]# getenforce
Enforcing
[root@controller ICM43]# setenforce Permissive
[root@controller ICM43]# getenforce
Permissive
[root@controller ICM43]#

Network Time Protocol

Another important prerequisite is to ensure that you synchronize the deployment server with the Network Time Protocol (NTP) server. Before you can deploy the cloud, you need to ensure that all of the nodes are synchronized with the NTP server. If the NTP server is not available or cannot be connected to, synchronize the time across the controller and compute nodes manually. Some deviation is acceptable.
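On RHEL 7.1, time synchronization is typically handled by chronyd. A minimal sketch follows (the server name is a placeholder; ntpd can be used instead):

# Point chrony at your NTP server and make sure the daemon is running
echo "server ntp.example.com iburst" >> /etc/chrony.conf
systemctl enable chronyd
systemctl restart chronyd

# Verify that the node is synchronizing
chronyc sources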
Installing IBM Cloud Manager 4.3

To install IBM Cloud Manager, you need to either download the installation packages, as we did for the examples in this book, or order a DVD that is specific to the platform on which the controller will be installed.

Important: IBM Cloud Manager with OpenStack 4.3 packages are available only for the x86_64 and ppc64 platforms. At the time of writing, z Systems drivers are supported only as compute node services.

After the installer packages are downloaded, we provide execute permission to all of the installable packages in the directory, as shown in Example B-5.

Example B-5   IBM Cloud Manager with OpenStack 4.3 installable packages
[root@controller ICM43]# ls -lh
total 12G
-rwxrwxrwx. 1 root root 5.5G Jun 30 06:04 cmwo430_xlinux_install.bin
-rwxrwxrwx. 1 root root  409 Jun 30 04:39 cmwo_4.3.lic
-rwxrwxrwx. 1 root root 2.8G Jul  2 23:39 cmwo_fixpack_4.3.0.1.tar.gz
-rwxrwxrwx. 1 root root 3.5G Sep 14 04:06 cmwo_fixpack_4.3.0.3.tar.gz
-rwxrwxrwx. 1 root root 3.0K Jun 30 04:38 cmwo-install-sample.rsp
-rwxrwxrwx. 1 root root  59M Jun 30 04:39 IBM Cloud Manager with OpenStack Hyper-V Agent.msi
-rwxrwxrwx. 1 root root 145K Jun 30 04:39 readme.pdf
-rwxrwxrwx. 1 root root 8.0K Jun 30 04:39 readme.txt
[root@controller ICM43]#
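The execute permissions shown in Example B-5 can be set with a simple chmod after changing into the download directory (the directory name ICM43 is ours):

# Grant execute permission to all downloaded packages
cd ICM43
chmod +x ./*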
To install, run the IBM Cloud Manager with OpenStack 4.3 binary installable package, as shown in Example B-6. The process takes you through interactive steps during the installation.

Example B-6   IBM Cloud Manager with OpenStack 4.3 installer
[root@controller ICM43]# ./cmwo430_xlinux_install.bin
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

===============================================================================
Choose Locale...
----------------
    1- Deutsch
  ->2- English
...
CHOOSE LOCALE BY NUMBER: 2

Further in the process, after the terms and conditions are accepted, the installer displays a preinstallation summary, as shown in Example B-7.

Example B-7   Installer Pre-Installation Summary
===============================================================================
Pre-Installation Summary
------------------------
Please Review the Following Before Continuing:

Product Name:
    IBM Cloud Manager with OpenStack

Install Folder:
    /opt/ibm/cmwo

PRESS <ENTER> TO CONTINUE:

Continuing with the interactive process, at one point the installer begins the installation of packages on the controller server. It takes a while for the installation to complete. Example B-8 shows the notification that installation is complete.

Example B-8   Successful installation message and next step from the installer
===============================================================================
Installation Complete
---------------------
The deployment server for IBM Cloud Manager with OpenStack has been successfully installed to:

    /opt/ibm/cmwo

The next step is to select a topology and deploy the components that are necessary to create your cloud environment.

To deploy from a web browser, use the following URL to Launch IBM Cloud Manager - Deployer:

    https://controller.itso.ibm.com:8443

To deploy from the command line, go to IBM Knowledge Center, select the product release, and see the deployment section.
Verifying IBM Cloud Manager installation

The installation was successful, as shown in Example B-8 on page 100. However, a suggested practice is to view the logs for any errors or warnings. The installation logs are available in the /opt/ibm/cmwo/_installation/Logs directory and can typically be identified uniquely by the date and time of the installation. Example B-9 shows a sample log file.

Example B-9   Installation log location
[root@controller Logs]# pwd
/opt/ibm/cmwo/_installation/Logs
[root@controller Logs]# ls -la
total 672
drwxrwxr-x. 2 root root   4096 Sep 22 16:47 .
drwxr-x---. 3 root root   4096 Sep 22 16:47 ..
-rwxr-xr-x. 1 root root 677109 Sep 22 16:47 IBM_Cloud_Manager_with_OpenStack_Install_09_22_2015_16_28_25.log
[root@controller Logs]#

Next, verify that the Chef server is installed properly and is running without any issues. Example B-10 shows a sample.

Example B-10   Chef server status
[root@controller ~]# chef-server-ctl status
run: bookshelf: (pid 4824) 767s; run: log: (pid 30281) 1135s
run: nginx: (pid 4860) 767s; run: log: (pid 30451) 1131s
run: oc_bifrost: (pid 4866) 766s; run: log: (pid 29992) 1142s
run: oc_id: (pid 4896) 766s; run: log: (pid 30029) 1141s
run: opscode-erchef: (pid 4928) 765s; run: log: (pid 30323) 1134s
run: opscode-expander: (pid 4934) 764s; run: log: (pid 30216) 1137s
run: opscode-expander-reindexer: (pid 4949) 764s; run: log: (pid 30224) 1136s
run: opscode-solr4: (pid 4959) 763s; run: log: (pid 30105) 1138s
run: postgresql: (pid 4966) 763s; run: log: (pid 29947) 1143s
run: rabbitmq: (pid 4973) 763s; run: log: (pid 29909) 1149s
run: redis_lb: (pid 5067) 762s; run: log: (pid 30428) 1132s

The components that are necessary for creating a cloud environment are installed and ready for use.

Applying IBM Cloud Manager with OpenStack 4.3 fix packs

While writing this book, we downloaded the latest fix pack (Fix Pack 3) to update the Chef cookbooks and other resources that are stored on our controller server. Any necessary fix packs can be downloaded from IBM Fix Central:
http://www.ibm.com/support/fixcentral/

After the download, the fix packs need to be stored locally on your controller system. The extraction of a fix pack is shown in Example B-11.

Example B-11   Extracting the fix pack
[root@controller ICM43]# tar -zxvf cmwo_fixpack_4.3.0.3.tar.gz
After the fix pack is extracted from its compressed format, recheck the IBM Cloud Manager with OpenStack 4.3 installation logs to ensure that there are no errors reported during installation. Upon confirmation, apply the fix pack as shown in Example B-12.

Example B-12   Applying the fix pack
[root@controller ICM43]# ./install_cmwo_fixpack.sh
09/23/2015 10:49:00 AM Starting installation of fix pack for IBM Cloud Manager with OpenStack 4.3.
09/23/2015 10:49:00 AM Installed version is 4.3.0.0-20150514-1836 .
09/23/2015 10:49:00 AM Fix pack is 4.3.0.3 F20150909-2056.
09/23/2015 10:49:01 AM Copying product files...
09/23/2015 10:51:28 AM Copy successful.
09/23/2015 10:51:28 AM Running post-install fix pack scripts...
09/23/2015 10:56:35 AM Post-install scripts completed successfully.
09/23/2015 10:56:35 AM IBM Cloud Manager with OpenStack fix pack installed successfully.
Fix pack install logs archived as /opt/ibm/cmwo/version/install_cmwo_fixpack_2015-09-23_10_56_35_logs.zip.
[root@controller ICM43]#

After the fix pack is applied to the IBM Cloud Manager with OpenStack installation, we suggest verifying that the fix pack logs do not include errors or warning messages.
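A quick way to scan the archived fix pack logs for problems is to extract the archive named in the Example B-12 output and search it; the grep patterns below are only a starting point:

# Extract the archived fix pack logs and scan them for errors or warnings
cd /opt/ibm/cmwo/version
unzip install_cmwo_fixpack_2015-09-23_10_56_35_logs.zip -d fixpack_logs
grep -ri -e error -e warning fixpack_logs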
Appendix C. Basic setup and use of zHPM

This appendix describes our first steps with the IBM z Systems Hypervisor Performance Manager (zHPM), which brings a goal-oriented approach to the performance management of a hypervisor.

Terminology: The term virtual server is used throughout this appendix and is equivalent to a virtual machine.

Example C-1 shows the commands for enabling and starting zHPM.

Example C-1   Enabling and starting zhpmd
[root@itsokvm1 ~]# systemctl enable zhpmd
ln -s '/usr/lib/systemd/system/zhpmd.service' '/etc/systemd/system/multi-user.target.wants/zhpmd.service'
[root@itsokvm1 ~]# systemctl start zhpmd

In our environment, we used the root user ID, which already has all of the authorities needed to manage zHPM. Example C-2 shows how to add a non-root user ID to the appropriate groups so that the user ID is authorized to use zHPM.

Example C-2   Authorizing a non-root user ID to manage zHPM
[root@itsokvm1 ~]# usermod -a -G zhpmuser,zhpmadm non-root

Example C-3 shows that, by default, CPU management was not enabled. It also shows how it was enabled.

Example C-3   Enabling zHPM CPU management
[root@itsokvm1 ~]# zhpm config --insecure
zHPM CPU Management is off
[root@itsokvm1 ~]# zhpm config --cpu-mgmt on --insecure
zHPM CPU Management is on
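At any point, the state of the zHPM daemon can be checked with the standard systemd tools, as in this sketch (output omitted):

# Check that the zHPM daemon is active and review its recent log messages
systemctl status zhpmd
journalctl -u zhpmd --since today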
Then, we created a new workload resource group named darling. Example C-4 shows how it was created. It also lists all of the defined workload resource groups and the default definition of the new group, together with a default policy and service class.

Example C-4   Creating and displaying a workload resource group
[root@itsokvm1 ~]# zhpm wrg-create --wrg-name darling --insecure
Created new workload resource group: b7f5fead-20d1-4edf-9386-d6f8b8332b54
[root@itsokvm1 ~]# zhpm wrg-display --insecure
Wrg-Id                               Wrg-Name                         BI     #VS
------------------------------------ -------------------------------- ------ ---
b28ccaf1-ee6d-4bd2-86a4-4eb5e51f3db6 zHPMDefaultWorkloadResourceGroup medium 7
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling                          medium 0
[root@itsokvm1 ~]# zhpm wrg-display --wrg-name darling --insecure --json
{
    "workload-resource-groups": [
        {
            "wrg-info": {
                "resource-uri": "/zhpm/wsapi/v1/workload-resource-groups/b7f5fead-20d1-4edf-9386-d6f8b8332b54",
                "resource-id": "b7f5fead-20d1-4edf-9386-d6f8b8332b54",
                "name": "darling",
                "description": ""
            },
            "performance-policy": {
                "perf-policy-info": {
                    "name": "zHPMDefaultPerformancePolicy",
                    "description": "zHPM Generated Default Performance Policy",
                    "last-modified-date": 1444991826275,
                    "last-modified-by": "root",
                    "business-importance": "medium"
                },
                "service-classes": [
                    {
                        "name": "zHPMDefaultServiceClass",
                        "description": "zHPM generated default service class",
                        "business-importance": "medium",
                        "velocity-goal": "moderate",
                        "cpu-critical": false,
                        "virtual-server-name-filters": [
                            ".*"
                        ]
                    }
                ]
            },
            "virtual-servers": []
        }
    ]
}
We added a virtual machine, linux80, to the darling workload resource group. Example C-5 shows how we added the virtual server and displayed information about all workload resource groups. The darling group now contains one virtual server. Because all virtual servers have the same goals, there are no dynamic resource adjustments reported by running the ra-display command.

Example C-5   Adding a virtual machine and displaying information
[root@itsokvm1 ~]# zhpm --insecure vs-wrg-add --vs-name linux80 --wrg-name darling
Successfully associated workload resource group to virtual server
[root@itsokvm1 ~]# zhpm wrg-display --insecure
Wrg-Id                               Wrg-Name                         BI     #VS
------------------------------------ -------------------------------- ------ ---
b28ccaf1-ee6d-4bd2-86a4-4eb5e51f3db6 zHPMDefaultWorkloadResourceGroup medium 6
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling                          medium 1
[root@itsokvm1 ~]# zhpm ra-display --insecure
No dynamic resource adjustments have occurred over duration (60min)
zHPM CPU Management is on

We created new policy and service class definitions and updated the darling workload resource group with this information, as shown in Example C-6. Virtual servers managed by this policy have higher velocity goals and higher importance than the default virtual servers managed by the default policy.

Example C-6   Updating policy and displaying information
[root@itsokvm1 ~]# cat darling.pol
{
  "performance-policy": {
    "perf-policy-info": {
      "name": "Darling",
      "description": "Policy for darling workload",
      "business-importance": "high"
    },
    "service-classes": [
      {
        "name": "ServiceClass1",
        "description": "service class",
        "business-importance": "high",
        "velocity-goal": "fast",
        "cpu-critical": false,
        "virtual-server-name-filters": [".*"]
      }]
  }
}
[root@itsokvm1 ~]# zhpm --insecure wrg-update --wrg-name darling --perf-policy darling.pol
Successfully set performance policy for workload resource group: b7f5fead-20d1-4edf-9386-d6f8b8332b54
[root@itsokvm1 ~]# zhpm wrg-display --wrg-name darling --insecure
Wrg-Id                               Wrg-Name BI      #VS
------------------------------------ -------- ------- ---
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling  high    1
[root@itsokvm1 ~]# zhpm wrg-display --insecure
Wrg-Id                               Wrg-Name                         BI      #VS
------------------------------------ -------------------------------- ------- ---
b28ccaf1-ee6d-4bd2-86a4-4eb5e51f3db6 zHPMDefaultWorkloadResourceGroup medium  6
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling                          high    1

Because the linux80 virtual machine now has more demanding goals than the competing virtual servers do, we see dynamic resource adjustments in the ra-display output, as shown in Example C-7. Virtual servers with less important goals are CPU donors for linux80. The CPU-SB and CPU-SA columns show each virtual server's CPU shares before and after the adjustment.

Example C-7   Displaying dynamic resource adjustments
[root@itsokvm1 ~]# zhpm ra-display --insecure
Adj-Time            Type     CPU-SB CPU-SA Vs-Name           Wrg-Name
------------------- -------- ------ ------ ----------------- --------------------------------
2015-10-16 07:12:56 receiver 1024   1084   linux80           darling
                    donor    1024   1012   linux84           zHPMDefaultWorkloadResourceGroup
                    donor    1024   1012   instance-00000003 zHPMDefaultWorkloadResourceGroup
                    donor    1024   1012   linux83           zHPMDefaultWorkloadResourceGroup
                    donor    1024   1012   linux85           zHPMDefaultWorkloadResourceGroup
                    donor    1024   1012   linux82           zHPMDefaultWorkloadResourceGroup
2015-10-16 07:13:56 receiver 1084   1154   linux80           darling
                    donor    1012   998    linux84           zHPMDefaultWorkloadResourceGroup
                    donor    1012   998    instance-00000003 zHPMDefaultWorkloadResourceGroup
                    donor    1012   998    linux83           zHPMDefaultWorkloadResourceGroup
                    donor    1012   998    linux85           zHPMDefaultWorkloadResourceGroup
                    donor    1012   998    linux82           zHPMDefaultWorkloadResourceGroup
Adj-Time Reason R-Vs-Name R-Wrg-Name
-------- ------ --------- ----------
No failed dynamic resource adjustments have occurred over duration (60min)

After a while, there were no more adjustments because the performance index (PI) of the service class associated with the receiver virtual machine achieved its goal. We decided to redefine the darling workload resource group with even stricter goals, as shown in Example C-8.

Example C-8   Updating policy and displaying information
[root@itsokvm1 ~]# cat darling.pol
{
  "performance-policy": {
    "perf-policy-info": {
      "name": "Darling",
      "description": "Policy for darling workload",
      "business-importance": "highest"
    },
    "service-classes": [
      {
        "name": "ServiceClass1",
        "description": "service class",
        "business-importance": "highest",
        "velocity-goal": "fastest",
        "cpu-critical": true,
        "virtual-server-name-filters": [".*"]
      }]
  }
}
[root@itsokvm1 ~]# zhpm --insecure wrg-update --wrg-name darling --perf-policy darling.pol
Successfully set performance policy for workload resource group: b7f5fead-20d1-4edf-9386-d6f8b8332b54
[root@itsokvm1 ~]# zhpm wrg-display --wrg-name darling --insecure
Wrg-Id                               Wrg-Name BI      #VS
------------------------------------ -------- ------- ---
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling  highest 1
[root@itsokvm1 ~]# zhpm wrg-display --insecure
Wrg-Id                               Wrg-Name                         BI      #VS
------------------------------------ -------------------------------- ------- ---
b28ccaf1-ee6d-4bd2-86a4-4eb5e51f3db6 zHPMDefaultWorkloadResourceGroup medium  6
b7f5fead-20d1-4edf-9386-d6f8b8332b54 darling                          highest 1

Example C-9 shows that linux80 was able to receive another set of resources from other virtual servers to satisfy its more demanding goal.

Example C-9   Displaying dynamic resource adjustments after the policy update
[root@itsokvm1 ~]# zhpm ra-display --insecure
Adj-Time            Type     CPU-SB CPU-SA Vs-Name           Wrg-Name
------------------- -------- ------ ------ ----------------- --------------------------------
2015-10-16 07:12:56 receiver 1024   1084   linux80           darling
                    donor    1024   1012   linux84           zHPMDefaultWorkloadResourceGroup
                    donor    1024   1012   instance-00000003 zHPMDefaultWorkloadResourceGroup
                    donor    1024   1012   linux83           zHPMDefaultWorkloadResourceGroup
                    donor    1024   1012   linux85           zHPMDefaultWorkloadResourceGroup
                    donor    1024   1012   linux82           zHPMDefaultWorkloadResourceGroup
2015-10-16 07:13:56 receiver 1084   1154   linux80           darling
                    donor    1012   998    linux84           zHPMDefaultWorkloadResourceGroup
                    donor    1012   998    instance-00000003 zHPMDefaultWorkloadResourceGroup
                    donor    1012   998    linux83           zHPMDefaultWorkloadResourceGroup
                    donor    1012   998    linux85           zHPMDefaultWorkloadResourceGroup
                    donor    1012   998    linux82           zHPMDefaultWorkloadResourceGroup
2015-10-16 07:19:41 receiver 1154   1224   linux80           darling
                    donor    998    984    linux84           zHPMDefaultWorkloadResourceGroup
                    donor    998    984    instance-00000003 zHPMDefaultWorkloadResourceGroup
                    donor    998    984    linux83           zHPMDefaultWorkloadResourceGroup
                    donor    998    984    linux85           zHPMDefaultWorkloadResourceGroup
                    donor    998    984    linux82           zHPMDefaultWorkloadResourceGroup
Adj-Time Reason R-Vs-Name R-Wrg-Name
-------- ------ --------- ----------
No failed dynamic resource adjustments have occurred over duration (60min)
ibm.com/redbooks

Printed in U.S.A.

ISBN 0738441201
SG24-8332-00