This talk provides an introduction to the OpenStack Interop Working Group, what it does, and how it works. We'll also look into some upcoming new work, such as the development of vertical programs (e.g. for clouds being built for NFV or other specific use cases).
Yes, the title is a Doctor Who quote. Get used to it, you’re going to be seeing a lot of Doctor Who in the next hour or so.
Example: the spring 2015 user survey revealed that 7% of production deployments include Sahara. So even though it’s been part of the old Integrated Release since Juno, we’re not likely to include it in a Guideline yet. In essence, most of the world doesn’t consider it a “core” part of the stack.
The Keystone v3 API made its first appearance in Grizzly. Neutronclient supported it in Juno. Nova still talks to the Glance v1 API today even though v3 is now being planned and v2 is current. You can’t simultaneously deploy nova-net and neutron. You can attach instances directly to an externally-routable provider network, or you can use floating IPs for external reachability.
Perhaps inherent in that ambition is the promise of interoperability.
Code against one set of capabilities and APIs; take your pick of many
public clouds, distributions, appliances, and services.
As it turns out, even clouds that are on some level the same
can look and act very differently.
OpenStack aims to be a “ubiquitous Open Source Cloud Computing platform that will meet the
needs of public and private clouds, regardless of size…”
There are ~4,650 config options in the tc-approved-release OpenStack projects.
And ~1,073 policy.json configurations (as of late 2015).
OpenStack has a lot of nerd knobs that can
affect how a cloud behaves. In fact…
You can also put things in front of OpenStack that change the behavior users see…
The mechanics of what’s under the hood may change behavior too…
(take image formats supported by various virt drivers & storage platforms for example)
And of course different types of workloads
use the cloud in different ways…
Mark T. Voelker (@marktvoelker)
• OpenStack Architect @ VMware, Interop Working Group co-chair
• Fact: can be bribed with doughnuts
• In copious (hah!) spare time: OpenStack solutions, Big Data, Massively Scalable Data Centers, DevOps,
making sawdust with extreme prejudice, raising two great kids with my awesome wife in North Carolina
“A computer nerd….is somebody who uses a computer in order to use a computer.”
[note: this talk will be slightly more entertaining if you’re a science fiction fan…
…otherwise it will merely be somewhat informative.]
Why Should Vendors Care?
• It’s good for your users
• It helps you promote your product
• It helps developers build applications
for your platform
• It’s now required if you want to
call your product OpenStack.
def list_images(cloud):
    """List images. Because all OpenStack Powered platforms can do that...somehow."""
    if cloud == 'vendorA':
        pass  # TODO: this also works for vendorX
    elif cloud == 'vendorB':
        pass  # TODO: this also worked for vendorY last week but now, um?
    elif cloud == 'vendorC':
        pass
    else:
        # I dunno what cloud this is, but it's OpenStack Powered! So something must work.
        # Resort to trial and error since we don't know.
        # D'oh, guess that wasn't it...
        # Aww...well, third time's the charm?
        pass
This function could
also be called:
Someone should make a standard.
Interoperability is hard.
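That pain is exactly what a standard removes. As a toy sketch (the `session` object, `FakeCloudSession`, and endpoint handling are simplified stand-ins for a real authenticated client, not any actual library), interoperable image listing needs only one code path:

```python
GLANCE_IMAGES_PATH = "/v2/images"  # the standard Image Service v2 endpoint

def list_images(session):
    """One code path: any cloud exposing the standard Capability will do."""
    return session.get(GLANCE_IMAGES_PATH)["images"]

# A fake session standing in for a client pointed at any compliant cloud;
# a real one would make an authenticated HTTP call.
class FakeCloudSession:
    def get(self, path):
        return {"images": [{"name": "cirros"}, {"name": "ubuntu"}]}

names = [img["name"] for img in list_images(FakeCloudSession())]
print(names)  # ['cirros', 'ubuntu']
```

No vendor ladder, no trial and error: the Capability, not the vendor, determines what works.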
Want to prevent that?
When the community
works together with
the Interop Working
Group, we can.
To understand how,
you need to know that
OpenStack is primarily
governed by two bodies:
Board of Directors & Technical Committee
(you decide which is which)
The Technical Committee provides:
“technical leadership for OpenStack as
a whole…enforces OpenStack ideals
(Commonality, Integration, Quality…),
decides on issues affecting multiple projects.”
“The Board of Directors provides strategic & financial oversight of Foundation resources and staff.”
Interop WG is a Board activity.
• One Director serves as co-chair, other
co-chair elected by participants.
• Its work and procedures must ultimately
be approved by a vote of the Board, not
the +2’s of its most trusted reviewers.
• It produces “Guidelines”, not standards.
• It can use the OpenStack trademark and
logo as both a carrot and stick.
• It can make requirements for products
that call themselves OpenStack.
So what’s a Guideline, then?
• A list of Capabilities that products must expose.
• A list of Tests products must pass
to prove it.
• A list of Designated Sections of
OpenStack code they must use to
provide those Capabilities.
(also: a list of exceptions & things that
might be required in the future)
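Concretely, each Guideline is published as a structured JSON document in the openstack/interop repository. A minimal sketch of that idea (the field names and capability IDs below are illustrative, not the exact schema):

```python
# A toy guideline in the spirit of the Interop WG's JSON format.
# Field names and IDs are illustrative; see openstack/interop for the real schema.
guideline = {
    "capabilities": {
        "compute-servers-create": {
            "status": "required",
            "tests": ["tempest.api.compute.servers.test_create_server"],
        },
        "images-v2-list": {
            "status": "required",
            "tests": ["tempest.api.image.v2.test_images.test_list_images"],
        },
        "object-storage-temp-url": {
            "status": "advisory",  # a candidate for a future Guideline
            "tests": ["tempest.api.object_storage.test_tempurl"],
        },
    },
    "designated-sections": ["nova/compute/api.py"],  # illustrative path
}

def required_tests(g):
    """Collect the tests a product must pass for the required Capabilities."""
    tests = []
    for cap in g["capabilities"].values():
        if cap["status"] == "required":
            tests.extend(cap["tests"])
    return sorted(tests)

print(required_tests(guideline))
```

Note that the advisory capability contributes no required tests yet: advisory status is how “things that might be required in the future” show up in practice.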
What’s the cadence of Guidelines?
• New Guideline every 6 months
• Each Guideline covers 3 OpenStack releases
• Only the newest 2 can be used for new logo licenses
Things you won’t find in Guidelines
• Stuff that end users don’t
see or can’t use:
• Admin-only APIs
• RPC APIs
• DB Schema
• HA Requirements
• Stuff that’s intentionally pluggable:
• Virt/net/storage drivers
• Specific databases
• Stuff that doesn’t have tests
• Stuff that’s been flagged (more
on that in a minute)
How do we decide what goes into a Guideline?
10 Guiding Principles
12 Core Criteria
A giant list of tests
New Capabilities must be advisory
for one Guideline before becoming required.
(Currently all Criteria are weighted fairly
equally, with a few small exceptions.)
When it’s all scored,
we send it to the Board.
Say more to me about these tests….
• Must be under TC governance
• All tests today are in Tempest (per the TC’s request)
• Must work on all releases covered by a Guideline
• Typically run via the RefStack wrapper which reports results to
refstack.openstack.org and shows which Guidelines you meet
Sometimes things go amiss…
Tests can be “flagged” (not required
for the duration of the Guideline) in
• Capability fails to meet Criteria
(e.g. was scored incorrectly)
• Test fails/is skipped due to an
accepted bug in the project
• Test fails/is skipped due to an
accepted bug in the test
• Test fails because it requires non-default configuration
• Test reflects an implementation
choice that isn’t widely deployed
even though the Capability itself is
When does all this happen?
• Summit-3 months: preliminary draft
• Summit-2 months: ID new Capabilities
• Summit-1 month: Score Capabilities
• Summit: “Solid” draft
• Summit+1 month: Self-testing
• Summit+2 months: Test Flagging
• Summit+3 months: Board Approval
(Note: 2015 was weird in that we had a very
accelerated schedule to get DefCore
bootstrapped…above is what it looks like
from now on.)
Why isn’t $my_favorite_thing in
the current Interoperability Guideline?
• It didn’t meet Criteria
(scored too low)
• It wasn’t scored in time
(scoring is surprisingly hard
to get right)
• It was admin-only or driver-specific
• That project isn’t yet widely deployed
• There wasn’t a test for it
• It didn’t score highly across
all releases covered in that Guideline
• Nobody brought it up yet
I’m an OpenStack developer.
I have a really cool capability.
How do I get it into the Guideline?
I’m the Interop WG co-chair.
I have a blog post you should read!
• Document it well
• Ensure it has usable tests
• Foster adoption among users,
SDK’s, & other projects
• Be patient: it needs to be in 3 covered releases
Do we get to offer feedback?
• Feedback built into Interop WG process
• Feedback encouraged for draft Guidelines
• Feedback encouraged via flag requests
• Feedback via User Survey
• Feedback via RefStack’s
community-visible results (you may also buy me a beer & bend
my ear about interoperability anytime)
How do I make RefStack work for me?
It’s actually not that hard.
Instructions here which boil down to:
1. Download refstack-client.
2. Run the “setup_env” script.
3. Configure tempest for your cloud.
4. Run refstack-client to execute tests.
5. Upload results to refstack.openstack.org and review.
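Step 5 boils down to a set comparison: did the tests that passed cover everything the Guideline requires? A simplified sketch (refstack-client’s real results format is richer than this, and the test IDs here are illustrative):

```python
# Simplified compliance check: the Guideline's required tests vs. the
# tests our cloud actually passed. (Illustrative test IDs.)
required = {
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.image.v2.test_images.test_list_images",
}
passed = {
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.object_storage.test_tempurl",
}

missing = sorted(required - passed)
if missing:
    print("Not yet compliant; required tests that failed or were skipped:")
    for test in missing:
        print("  -", test)
else:
    print("All required tests passed!")
```

refstack.openstack.org does essentially this for you against every published Guideline and shows which ones you meet.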
Why run all the tests & upload the results?
Because more data is better.
This gives us additional data
about what works in your cloud.
With data from lots of clouds, we
can make better scoring
decisions in the future when
considering adding Capabilities
What if I don’t pass all the tests?
• Figure out why some tests failed.
• Was it environmental (e.g. a timeout due to
storage being slow)? Tweaking tempest
config may help.
• Was it due to a bug in the test?
• Was it due to a bug in OpenStack?
• Do you have grounds for requesting a flag?
• Valid reasons for flagging a test and how to
do it can be found here.
• Talk to us!
• We have an interest in helping you succeed.
• firstname.lastname@example.org is here to help!
• Catch us on IRC at #openstack-interop or on the mailing list
• Work begins on 2017.08
• More testing
• Vendors currently testing against just-released 2017.01
• Vertical Programs
• Example: what would “OpenStack Powered NFV” look like?
• New add-on programs and/or “vertical” programs
• Example: what would “OpenStack Powered Compute with
Database” look like?
• Could these be a conduit to projects defining (early on, for
themselves) what interoperability looks like to users of their projects?
Sometimes the enemy is us.
• Sometimes projects are slow
to adopt support for each
others’ new APIs and features.
• Sometimes projects provide
multiple ways to do the same
things (and sometimes they behave differently)
• Sometimes we don’t have
good data about what’s really deployed and used
• Sometimes tests use admin
credentials unnecessarily or
lump many Capabilities into a single test
For this to work, we have to communicate as one community.
• Board of Directors/Interop WG
• End Users
But what interoperability issues
should we attack first?
How about the ones in
What interoperability issues
should I, an end user, watch
out for?
Here’s that report again.
1. How we test interop
2. Variance in API evolution
3. External network connectivity
4. API and Policy discoverability
5. Understanding the Interop WG
I need some less common
things in my cloud...
...wish I had interop for those too.
If you use some less common
capabilities, you probably care about
interop for them. (Project devs do too.)
• Build on existing Guidelines & process
• Let project teams define what
interop looks like for their users
When projects think
about interop early, users
have a way to see what to expect.
Now, did you know?
The kernel & userspace that
run in your phone are different
than the ones you have on your laptop
(or in your router, or your TV).
We still think of both as Linux: each is tuned for
specific use cases.
OpenStack is like that now too.
A cloud built for a particular use case is
still useful and still OpenStack, but it
might not behave as expected when
moving from one cloud or product to another.
We’re just getting started
designing vertical programs.
Much depends on the use case.
Some things a use case might require:
• Capabilities not commonly
exposed in general-purpose clouds
• Specific attributes in the API
• Scenario tests (beyond API
availability, maybe beyond Tempest)
• Specific control plane design
• Performance criteria.
• Admin APIs
• Different OpenStack projects
The key is to take the general
methodology and criteria we’ve developed...
...and apply them in a way that’s
specific to a vertical use case.
• Same BoD approval
• Same schema
• Same general process
• Same criteria (?)
• Builds on Powered (?)
• Different tests
• Different reporting
• Different mechanics
And of course it sometimes helps to work
with our friends in adjacent communities.
Some examples of
things a general-purpose
cloud might not need
for interoperability that
an NFV cloud might:
• SR-IOV support
• High-PPS network performance
• Provider network attachment
• NUMA topology awareness
• CPU pinning
• Active-active control plane
We’d like to get
moving quickly on programs
for vertical use cases.
There seems to be
demand, and people
ready to help. NFV
seems like a good first candidate.
I’ll be jumping in myself.
Speaking of which, we should really wrap up today’s
talk and get on with it…
Want to learn more?
• 2017.01 Guideline
• 2016.08 Guideline
• Next Guideline draft
• Public RefStack Server
• OpenStack Interop Homepage
• Core Criteria
• Interop WG procedural overview
• Lexicon of Interop WG terms
• Interop WG wiki & meeting Info
• How to submit patches
“Do what I do. Hold on tight and pretend it’s a plan!”
[with apologies to the fine folks at the BBC’s “Doctor Who”]
(please don’t have me arrested)