This document discusses using Ansible to automate benchmarking of OpenStack clouds. It describes using OpenStack-Ansible to deploy OpenStack, Ansible roles to run benchmarks, and parsing tasks to analyze benchmark results. The benchmarking tasks exercise system performance with tools such as Passmark, stress, and mprime, along with reboot tests. Results are fetched to a control host and system logs are collected. The experience of using Ansible for automated, repeatable benchmarking is positive due to its ease of use, flexibility, and ability to reduce manual work. Future enhancements could expand benchmark coverage and contribute to community benchmarking projects.
2. Flex | Ciii: A Global Server and Rack Integration Business
• Deploy node, rack & multi-rack level solutions with native hardware only, or native & third-party hardware for hybrid & JDM solutions
• OTD to Commit, BTO/CTO systems integration: consistently at 99%
• Universal Test System (UTS): automated logging, analysis & reporting
• Lead time to customer, to request: 1-4 days
• Capability order acknowledgement time: 4 hours
• SMI with suppliers: consistently at >65% of spend
• Sites: Cork, Ireland; Brazil; Guadalajara, MX; Milpitas, CA; Austin, TX; Columbia, SC; Penang, Malaysia; Zhuhai, China; Singapore; Hungary (campus); India
3. Types of Product Engagements

COTS Integration (Cloud and Data Center):
• Rack-level interoperability
• Cable management and diagrams
• Functional cabinet to fully tested cluster assembly & test
• Customer/manufacture/provision of all fibre optic, data & power cables
• System/cluster-level stress testing, including R/W activity, latency measurements, thermal performance, etc.

"Light" Customization (COTS Integration, plus):
• System modifications to OEM needs: cooling, application drive, carriers
• Non-AVL component selections
• BIOS defaults changes
• BMC defaults changes

"Heavy" Customization:
• Component-level design: leverage existing PCBA; design new mid-plane; design new risers
• Custom bezel design
• System-level validation: thermal / EMC / safety
• Leverage industry-standard offerings
• Solutions design & engineering: define best of breed, price for performance & life-cycle management
• Leverage existing system-level validation*
• OEM branding and packaging

Complexity and time to market increase from COTS integration through "light" to "heavy" customization.
5. Why use Ansible as the framework foundation?

Ansible:
• Agent-less
• OpenStack support
• Network switch support
• Real-time job events
• Community

Ansible Tower:
• Role-based access
• Job scheduling
• Credentials management
• SCM support
• Inventory management
• Surveys
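Since the first point above is that Ansible is agent-less, a minimal illustration may help: the control host only needs SSH access to the targets; nothing is installed on them beforehand. A sketch, assuming a hypothetical inventory file named `hosts` listing the target machines:

```shell
# Agent-less check: Ansible connects over SSH and runs the built-in
# "ping" module on every host in the inventory.
ansible all -i hosts -m ping

# The same mechanism runs any module ad hoc, e.g. collecting CPU facts:
ansible all -i hosts -m setup -a 'filter=ansible_processor*'
```

The same inventory and SSH credentials are then reused by the playbooks, which is what makes the framework easy to hand to other team members.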
6. Server Provisioning with Stacki
✓ Open source
✓ Simplified bare-metal provisioning
✓ MAC-address-targeted PXE booting
✓ Customizable local CentOS and Ubuntu repositories
✓ Hardware & software RAID support
✓ NIC bonding
✓ Post-install configuration
✓ http://www.stacki.com/
7. OpenStack-Ansible
• Uses Ansible to deploy OpenStack
• First released in April 2015 (Kilo)
• Currently on Mitaka (stable) and Newton (master branch)
• OpenStack-Ansible deployment steps:
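For the Kilo-to-Newton era discussed here, the initial steps can be sketched roughly as follows; the repository URL, branch name, and script paths follow the upstream layout of that period, so verify them against the install guide for your release:

```shell
# On the deployment host: fetch OpenStack-Ansible and bootstrap Ansible itself.
git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
git checkout stable/mitaka          # or the branch for your target release
scripts/bootstrap-ansible.sh

# Seed the deployment configuration, then edit it for your environment.
cp -r etc/openstack_deploy /etc/openstack_deploy
```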
9. OpenStack-Ansible Deployment Process
• Deployment host
  - Executes the deployment playbooks via Ansible
  - Holds all the necessary deployment configuration files
  - Can also be a target host!
• Target hosts
  - All the OpenStack cluster nodes (controllers, compute, and storage)
  - Need proper network configuration and dependency packages
• Deployment configuration
  - Define the proper network bridges and container networking variables (VLAN, VXLAN, flat, etc.)
  - Define node roles
• Deployment
  - Run the playbooks: setup-hosts, setup-infrastructure, and setup-openstack
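The final "Deployment" step above corresponds to running the three top-level playbooks from the repository's playbooks directory, each building on the previous one; paths assume the checkout location used by the upstream install guide:

```shell
cd /opt/openstack-ansible/playbooks

# Prepare the target hosts and build the LXC containers.
openstack-ansible setup-hosts.yml

# Deploy shared infrastructure (Galera, RabbitMQ, memcached, etc.).
openstack-ansible setup-infrastructure.yml

# Deploy the OpenStack services themselves.
openstack-ansible setup-openstack.yml
```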
17. The Ansible Experience
• The "Good"
  - Easy to train other members of the team on usage
  - Intuitive playbook structure
  - Flexibility in custom modules and plugins
  - Repeatable benchmarking
  - Cuts down manual work!
• The "Kinda Bad"... but not really
  - Error messages are sometimes obscure
  - To OpenSSH or to Paramiko?
• The "Could Be Better"
  - Active/Active HA for Tower (on the roadmap!)
  - Canceled jobs != failed jobs
18. Looking Ahead
• Expand benchmark coverage
  - Application benchmarks
  - Database benchmarks
  - Hadoop deployments
  - Container deployments
• Full-stack validation suite with Stacki and OpenStack-Ansible
  - One playbook to validate them all
• Community contribution
  - Support OpenStack benchmarking projects (e.g., Browbeat)
20. Unixbench – System Benchmark
• Released in 1995 to measure the performance of an entire system
• Sensitive to HW as well as SW configuration and OS version
• Sub-test scores are compared against a Sun SPARCstation 20-61 (1994) to create a sub-test index
• Sub-test indices are combined (geometric mean) to form the System Index
Sub-test Name                | Description                                  | System Component
Dhrystone                    | Synthetic integer test (1984)                | CPU
Whetstone                    | Synthetic floating-point test (1972)         | CPU
execl()                      | Count of system exec calls in 1 second       | OS
File Copy                    | Writes file to disk then makes a copy        | Disk I/O
Pipe Throughput              | Measures 512 B transfers/second              | Memory, OS/Shell
Pipe-based Context Switching | "2-way conversation" with increasing integer | Memory, OS/Shell
Shell Scripts                | Shell script execution time                  | OS/Shell
Sys call                     | Latency induced by entering the OS           | OS
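The System Index combination mentioned above is a plain geometric mean of the sub-test indices, exp(mean(log(x))), which can be sketched in a one-liner. The three index values below are made up for illustration, not real Unixbench results:

```shell
# Geometric mean = exp(mean(log(x))) over the sub-test indices.
# The input values are hypothetical, chosen for a clean result.
printf '105.0\n210.0\n420.0\n' |
  awk '{ s += log($1); n++ } END { printf "%.1f\n", exp(s / n) }'
# -> 210.0
```

Because the combination is a geometric mean rather than an arithmetic one, a single outlier sub-test cannot dominate the System Index.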