Openstack Messaging
with ZMQ
(Distributed RPC)

By Yatin Kumbhare
yatinkumbhare@gmail.com
ZMQ / 0MQ / ZeroMQ - Introduction
● Zero broker, zero latency, zero cost - a culture of minimalism.
● Brokerless - no SPOF
● High scalability
● Flexible transports: inproc, ipc, tcp
● Order of bind and connect doesn't matter (see the sketch below)
● Fast and reliable
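A minimal sketch (my own illustration, not from the slides; the port is arbitrary) of the bind/connect ordering point: a PUSH socket can connect and send before the PULL side has bound, and ZeroMQ queues the message and delivers it once the peer appears.

# bind/connect order does not matter (sketch)
import zmq

ctx = zmq.Context()

push = ctx.socket(zmq.PUSH)
push.connect("tcp://127.0.0.1:5599")   # nothing is listening here yet
push.send(b"hello")                    # queued on the PUSH socket's pipe

pull = ctx.socket(zmq.PULL)
pull.bind("tcp://127.0.0.1:5599")      # bind after the connect and send
print(pull.recv())                     # the queued b"hello" arrives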
REQ-REP
# Client
import zmq

context = zmq.Context()
print("Connecting to hello world server…")
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")

for request in range(10):
    print("Sending request %s …" % request)
    socket.send("Hello")
    # Get the reply.
    message = socket.recv()
    print("Received reply %s [ %s ]" % (request, message))

# Server
import zmq

context = zmq.Context()
# Socket to reply to client requests
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")

while True:
    print("waiting for client request")
    msg = socket.recv()
    print("Message received %s" % msg)
    socket.send("World")
PUB-SUB
# Publisher
import zmq
from random import randint

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")

while True:
    zipcode = randint(1, 10)
    temperature = randint(-20, 50)
    humidity = randint(10, 15)
    print("Publish data: %d %d %d" % (zipcode, temperature, humidity))
    socket.send("%d %d %d" % (zipcode, temperature, humidity))

# Subscriber
import zmq
import sys

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")

zip_filter = sys.argv[1] if len(sys.argv) > 1 else ""
print("Collecting updates from weather service %s" % zip_filter)
socket.setsockopt(zmq.SUBSCRIBE, zip_filter)

# process 5 records
for record in range(5):
    data = socket.recv()
    print(data)
PUB-SUB
PUSH-PULL
# Ventilator
import zmq
context = zmq.Context()
# Socket to send messages on
sender = context.socket(zmq.PUSH)
sender.bind("tcp://*:5557")
# Socket to signal the start of the batch to the sink
sink = context.socket(zmq.PUSH)
sink.connect("tcp://localhost:5558")
sink.send("0")

# Worker
import zmq
context = zmq.Context()
# Socket to receive messages on
receiver = context.socket(zmq.PULL)
receiver.connect("tcp://localhost:5557")
# Socket to send messages to the sink
sender = context.socket(zmq.PUSH)
sender.connect("tcp://localhost:5558")

# Sink
import zmq
context = zmq.Context()
# Socket to receive messages on
receiver = context.socket(zmq.PULL)
receiver.bind("tcp://*:5558")
# Wait for start of batch
s = receiver.recv()
Single-Point-Failure
rpc.cast
rpc.call
ZMQ for Openstack
● In openstack, the zeromq driver makes use of two transports: TCP and IPC.
● TCP sockets carry messages from nova services to the rpc-zmq-receiver service.
● IPC sockets forward messages received by rpc-zmq-receiver to the local nova services, which listen on IPC sockets.
● ZMQ uses the PUB-SUB and PUSH-PULL socket types.
● PUB-SUB - to get the reply in the case of rpc.call; this plays the role of the direct consumer/publisher in the rabbitmq driver.
● PUSH-PULL - for sending messages to other services, for example rpc.cast (see the sketch below).
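A minimal sketch of the rpc.cast path described above (my own simplification, not the oslo.messaging code itself; the host, topic and payload are illustrative, and the port/path conventions are the ones described on the following slides):

# rpc.cast, caster side (sketch)
import json
import zmq

context = zmq.Context()

# PUSH over TCP to the rpc-zmq-receiver on the target host
cast = context.socket(zmq.PUSH)
cast.connect("tcp://controller:9501")

# first frame: routing topic service.<hostname>; second frame: the message
message = {"method": "some_method", "args": {}}   # hypothetical payload
cast.send_multipart([b"conductor.controller", json.dumps(message).encode()])

# rpc.cast is fire-and-forget: no reply socket is created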
zmq-receiver service
● rpc-zmq-receiver is a standalone service built around several sockets.
● It instantiates the ZmqProxy class from oslo.messaging's impl_zmq.py file.
● The service creates a tcp://*:9501 PULL socket and listens for messages from the other services.
● For topics fanout~service or zmq_replies.<hostname>, a zmq.PUB socket is created.
● For topic service.<hostname> (e.g. conductor.node), a zmq.PUSH socket is created.
● These forwarding sockets use the IPC protocol.
● So data received on tcp://*:9501 is forwarded to the matching IPC socket (see the proxy sketch below).
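A minimal sketch of that forwarding role (my own simplification with an assumed framing - routing topic first, payload after - and lazily created backend sockets; the real ZmqProxy in impl_zmq.py does considerably more):

# rpc-zmq-receiver forwarding loop (sketch)
import zmq

IPC_DIR = "/var/run/openstack"

context = zmq.Context()
frontend = context.socket(zmq.PULL)
frontend.bind("tcp://*:9501")

backends = {}   # topic -> IPC socket, created on first use

while True:
    frames = frontend.recv_multipart()        # [topic, payload, ...]
    topic = frames[0].decode()
    if topic not in backends:
        # fanout~* and zmq_replies.* topics fan out with PUB,
        # point-to-point topics such as conductor.node use PUSH
        kind = zmq.PUB if topic.startswith(("fanout~", "zmq_replies")) else zmq.PUSH
        sock = context.socket(kind)
        sock.bind("ipc://%s/zmq_topic_%s" % (IPC_DIR, topic))
        backends[topic] = sock
    # strip the routing topic and forward the rest over IPC
    backends[topic].send_multipart(frames[1:])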
Nova-* services
● Each nova service instantiates ZmqReactor from oslo.messaging's impl_zmq.py file.
● A nova service mostly works with the zmq.PUSH, zmq.PULL and zmq.SUB socket types.
● For consuming messages a nova service uses IPC sockets with zmq.PULL; to cast a message it creates a TCP zmq.PUSH socket.
● Example: zmq-receiver publishes a message on ipc:///var/run/openstack/zmq_topic_conductor.node with a PUSH socket, and the conductor service consumes it from ipc:///var/run/openstack/zmq_topic_conductor.node with a PULL socket (see the consumer sketch below).
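A minimal consumer sketch for that example (the IPC path comes from the slide; the dispatch step is only hinted at - the real ZmqReactor feeds the message into the RPC dispatcher inside an event loop):

# nova-conductor consuming its topic socket (sketch)
import zmq

context = zmq.Context()
receiver = context.socket(zmq.PULL)
receiver.connect("ipc:///var/run/openstack/zmq_topic_conductor.node")

while True:
    frames = receiver.recv_multipart()
    # frames carry the serialized RPC message forwarded by zmq-receiver;
    # deserialize and dispatch it to the conductor manager here
    print("received %d frame(s)" % len(frames))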
Nova-compute
● Topic fanout~compute: bound to ipc:// with a zmq.SUB socket.
● Topic compute.<hostname>: bound to ipc:// with a zmq.PULL socket.
● nova-compute does an rpc.call to the conductor at the start of the service.
● The hostname of the conductor service is read from the MatchMakerRing file.
● The compute service creates a tcp://<conductor-hostname>:9501 PUSH socket and sends the message, which is received by zmq-receiver and forwarded to the nova-conductor service.
● As this is an rpc.call, to receive the reply nova-compute creates ipc:///var/run/openstack/zmq_topic_zmq_replies.<hostname> with a SUB socket, subscribes to the msg-id (a unique uuid), and waits for the response from the conductor (see the caller sketch below).
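A sketch of that caller side (my own simplification: the hostnames gravity/nodex and the envelope keys are illustrative, matching the routing example on the next slides, and the framing mirrors the earlier sketches rather than the exact oslo.messaging wire format):

# rpc.call from nova-compute on host "nodex" to the conductor on "gravity" (sketch)
import json
import uuid
import zmq

context = zmq.Context()
msg_id = uuid.uuid4().hex

# subscribe for the reply first so it cannot be missed
reply = context.socket(zmq.SUB)
reply.connect("ipc:///var/run/openstack/zmq_topic_zmq_replies.nodex")
reply.setsockopt(zmq.SUBSCRIBE, msg_id.encode())

# push the request to the conductor host's zmq-receiver
request = context.socket(zmq.PUSH)
request.connect("tcp://gravity:9501")
envelope = {"msg_id": msg_id,
            "reply_topic": "zmq_replies.nodex",   # where the reply must come back
            "method": "service_update",
            "args": {}}                           # illustrative payload
request.send_multipart([b"conductor.gravity", json.dumps(envelope).encode()])

# block until the reply tagged with our msg_id arrives
frames = reply.recv_multipart()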
MatchMakerRing file
{
    "conductor": ["controller"],
    "scheduler": ["controller"],
    "compute": ["controller", "computenode"],
    "network": ["controller", "computenode"],
    "zmq_replies": ["controller", "computenode"],
    "cert": ["controller"],
    "cinderscheduler": ["controller"],
    "cinder-volume": ["controller"],
    "consoleauth": ["controller"]
}
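A small sketch of how a topic is resolved against this file (the file path is illustrative; the real lookup is done by the MatchMakerRing class, which expands a bare topic into per-host topics such as conductor.controller):

# resolving a topic through the ring file (sketch)
import json

with open("matchmaker_ring.json") as f:   # illustrative path
    ring = json.load(f)

def lookup(topic):
    # return (host, "<topic>.<host>") pairs for every host serving the topic
    return [(host, "%s.%s" % (topic, host)) for host in ring.get(topic, [])]

print(lookup("conductor"))   # [('controller', 'conductor.controller')]
print(lookup("compute"))     # controller and computenode entries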
Message routing
● compute makes an rpc.call to the conductor.
● It sends the data on tcp://gravity:9501 with PUSH, which is received by the zmq-receiver service on the gravity host with topic 'conductor'.
● Before the rpc.call, the following data is included in the message:
   ● the reply topic, zmq_replies.nodex
   ● a unique msg_id
   ● an extra key "method": "-reply"
● To receive the reply on topic zmq_replies.nodex, compute creates the socket ipc://...zmq_topic_zmq_replies.nodex with SUB and subscribes to the msg_id.
● The conductor consumes the data, takes out the reply topic zmq_replies.nodex, and sends back the response by creating tcp://nodex:9501 with PUSH (a sketch of this reply step follows).

Message received on gravity with topic 'conductor':

'conductor' {'args': {'service': {u'binary': u'nova-compute',
                                  u'created_at': u'2014-02-19T11:02:32.000000',
                                  u'deleted': 0,
                                  u'deleted_at': None,
                                  u'disabled': False,
                                  u'disabled_reason': None,
                                  u'host': u'nodex',
                                  u'id': 2,
                                  u'report_count': 22,
                                  u'topic': u'compute',
                                  u'updated_at': u'2014-02-19T11:08:10.000000'},
                      'values': {'report_count': 23}},
             'method': 'service_update',
             'namespace': None,
             'version': '1.34'}
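A sketch of the conductor's reply step described above (simplified; the envelope keys match the caller sketch from the Nova-compute slide, not the exact oslo.messaging format):

# conductor sending the rpc.call response back to the caller (sketch)
import json
import zmq

context = zmq.Context()

def send_reply(request, response):
    # the caller attached a unique msg_id and a reply topic zmq_replies.<caller-host>
    reply_topic = request["reply_topic"]          # e.g. "zmq_replies.nodex"
    caller_host = reply_topic.split(".", 1)[1]    # "nodex"

    sock = context.socket(zmq.PUSH)
    sock.connect("tcp://%s:9501" % caller_host)   # the caller host's zmq-receiver
    sock.send_multipart([reply_topic.encode(),
                         request["msg_id"].encode(),
                         json.dumps(response).encode()])
    sock.close()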
Message routing
● The nodex zmq-receiver service receives the data and checks the topic; as the topic is zmq_replies.nodex, it creates ipc://..../zmq_topic_zmq_replies.nodex with PUB and sends the data on it.
● The compute service already has the reply socket (ipc:// with SUB) and is already subscribed to the msg_id, so it gets back the response of the rpc.call (see the note below on how the msg_id subscription works).

Message received on nodex with topic 'zmq_replies.nodex':

u'zmq_replies.nodex' {'args': {'msg_id': u'103b95cc64ab4e39924b6240a8dbaac8',
                               'response': [[{'binary': u'nova-compute',
                                              'created_at': '2014-02-19T11:02:32.000000',
                                              'deleted': 0L,
                                              'deleted_at': None,
                                              'disabled': False,
                                              'disabled_reason': None,
                                              'host': u'nodex',
                                              'id': 2L,
                                              'report_count': 23,
                                              'topic': u'compute',
                                              'updated_at': '2014-02-19T11:08:33.572453'}]]},
                      'method': '-process_reply'}
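Why subscribing to the msg_id is enough to pick out the right reply: ZeroMQ SUB filtering is a prefix match on the first frame, so a reply published as [msg_id, response] on the shared zmq_replies socket is delivered only to the caller subscribed to that msg_id. A tiny sketch (the path and msg_id come from the slides; the framing is the simplified one used in the earlier sketches):

# receiving the reply on the shared zmq_replies socket (sketch)
import zmq

context = zmq.Context()
replies = context.socket(zmq.SUB)
replies.connect("ipc:///var/run/openstack/zmq_topic_zmq_replies.nodex")
replies.setsockopt(zmq.SUBSCRIBE, b"103b95cc64ab4e39924b6240a8dbaac8")

frames = replies.recv_multipart()   # only messages whose first frame starts
                                    # with the subscribed msg_id are delivered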
Devstack with zmq
# localrc
● Controller: ENABLED_SERVICES+=,-rabbit,-qpid,zeromq
● Compute node: ENABLED_SERVICES=n-cpu,n-net,zeromq
● Apply patch: https://review.openstack.org/#/c/59875/3
● Due to an issue with zeromq support for notifications in Glance, glance-api fails to start when configured to use ZeroMQ. Comment out the following line in lib/glance in the devstack repository:
  #iniset_rpc_backend glance $GLANCE_API_CONF DEFAULT