Zmq drivers overview
Current state and further development
By Oleksii Zamiatin (ozamiatin)
ozamiatin@mirantis.com
Software engineer at Mirantis
High-level (old driver)
Here we have two typical
OpenStack nodes driven by the zmq driver
Zmq Proxy
● Needed so the ZeroMQ
driver can expose a single
assigned TCP port
● Runs on each node
● Redirects traffic from the
TCP socket to local IPC
Client-ZmqProxy-Server
In terms of the zmq driver, each
service is an RPC client
(which talks), an RPC server
(which listens and replies), or
both.
Each node consists of a zmq
proxy and an unlimited number
of clients and servers.
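To make the client/server terminology concrete, here is a minimal oslo.messaging sketch. This is a hedged illustration, assuming the classic get_transport/RPCClient interface; the transport URL, topic, and method names are placeholders, not this driver's actual code.

```python
import oslo_messaging as messaging
from oslo_config import cfg

# Hypothetical transport URL; "zmq" selects the ZeroMQ driver.
transport = messaging.get_transport(cfg.CONF, url="zmq://")
target = messaging.Target(topic="demo-topic", server="node-1")
client = messaging.RPCClient(transport, target)

# CALL: send a request and block until the server replies.
result = client.call({}, "ping", arg="hello")

# CAST: fire and forget, no reply is expected.
client.cast({}, "ping", arg="hello")
```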
CALL path (old driver)
Here is the typical path of a
CALL message with its reply
The blue path also represents
a direct CAST (no reply, no
fanout)
CALL path localhost (old driver)
The same path on localhost
The difference: we have a
single proxy here
CALL path (old driver)
How many sockets do we
open for one CALL?
We open 6 sockets per call
(plus 2 that stay open on the
proxies)
3 CALLs use 2 + 18 = 20
sockets
10 CALLs = 2 + 60 = 62
sockets.
A socket cache would fix the
growth problem, but we can
allocate fewer sockets in the
first place
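The growth is linear in the number of concurrent calls; a quick sanity check of the slide's numbers:

```python
# Old driver: 2 long-lived proxy sockets plus 6 sockets per CALL.
def old_driver_sockets(calls):
    return 2 + 6 * calls

assert old_driver_sockets(1) == 8    # 6 for the call, 2 on the proxies
assert old_driver_sockets(3) == 20
assert old_driver_sockets(10) == 62
```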
CALL path (new driver)
What was proposed to
change:
Simplify the CALL socket
pipeline with the REQ/REP pattern
Here we have only 2 sockets
(REQ + REP) per call,
plus 2 on the proxy.
10 CALLs = 2 + 20 = 22
sockets, compared to 62 on the
previous slide.
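A minimal pyzmq sketch of the REQ/REP lockstep, assuming an in-process demo; the IPC endpoint is a made-up example, and the real driver routes through the proxy:

```python
import zmq

ctx = zmq.Context()

# Server side: one REP socket answers requests in lockstep.
rep = ctx.socket(zmq.REP)
rep.bind("ipc:///tmp/rpc-demo")  # hypothetical endpoint

# Client side: one REQ socket per CALL.
req = ctx.socket(zmq.REQ)
req.connect("ipc:///tmp/rpc-demo")

req.send(b"call-payload")
print(rep.recv())            # server receives the request...
rep.send(b"reply-payload")   # ...and replies on the same socket
print(req.recv())            # client unblocks when the reply arrives
```

With the counting used on the previous slide this gives 2 + 2n sockets for n concurrent calls, i.e. 22 for 10 calls.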
CAST (new driver)
We use DEALER instead of
REQ for CAST to stay asynchronous
(no waiting for a reply)
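Sketched with pyzmq (the endpoint is hypothetical): DEALER has no send/recv lockstep, so a cast returns immediately.

```python
import zmq

ctx = zmq.Context()
dealer = ctx.socket(zmq.DEALER)
dealer.connect("tcp://127.0.0.1:9501")  # hypothetical proxy endpoint

# The empty delimiter frame keeps the envelope compatible with
# REQ/REP peers; DEALER does not enforce waiting for a reply.
dealer.send_multipart([b"", b"cast-payload"])
# No recv() here; the caller moves on immediately.
```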
Fanout (old driver)
Fanout is a broadcast to all
available nodes
It becomes tricky to detect which
nodes are listening to a specific
topic
To resolve this we have a so-
called “Matchmaker” object,
which returns a list of
hosts for a topic on the client side
Matchmaker redis (old driver)
One good matchmaker
implementation uses
Redis storage, which is synchronized
on its own
A server that starts listening
on a topic puts the topic and
its host IP into Redis, and this
info becomes available on all
nodes
When a client starts a fanout cast,
it receives the list of hosts from the
matchmaker
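A minimal sketch of the idea with redis-py; the key layout, function names, and hosts are illustrative, not the driver's actual schema:

```python
import redis

r = redis.StrictRedis(host="redis-host", port=6379)

def register(topic, host):
    """Server side: advertise that `host` listens on `topic`."""
    r.sadd(topic, host)

def lookup(topic):
    """Client side: fetch every host for a fanout cast."""
    return [h.decode() for h in r.smembers(topic)]

register("compute", "10.0.0.5")
register("compute", "10.0.0.6")
print(lookup("compute"))  # ['10.0.0.5', '10.0.0.6'] (set order not guaranteed)
```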
Fanout (new driver)
The Redis solution is also
applicable in the new driver with
REQ/REP and ROUTER/DEALER
But we also need to research
PUB/SUB with a topic subscription
filter for the fanout implementation
That requires XSUB/XPUB on the
proxy, and one additional port
(9502) is allocated to the zmq
driver, because 9501 is
already in use by REQ/REP
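The XSUB/XPUB part of the proxy could look like this pyzmq sketch; only port 9502 comes from the slide, the second port is a made-up example:

```python
import zmq

ctx = zmq.Context()

# Publishers connect to the XSUB side of the proxy...
xsub = ctx.socket(zmq.XSUB)
xsub.bind("tcp://*:9502")

# ...subscribers connect to the XPUB side (port is hypothetical).
xpub = ctx.socket(zmq.XPUB)
xpub.bind("tcp://*:9503")

# zmq.proxy forwards messages one way and subscription
# frames the other way, blocking forever.
zmq.proxy(xsub, xpub)
```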
Notifier (new driver)
Ceilometer is a service that
relies on this pattern
The best implementation of the
notifier is PUB/SUB
With the notifier it becomes easy
to collect logging from
all over the cloud onto a single
listener node
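On the listener side, ZeroMQ's prefix-based subscription filter does the topic matching. A hedged pyzmq sketch; the endpoint and topic name are assumptions:

```python
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://proxy-host:9502")  # hypothetical proxy endpoint

# Prefix match on the first frame: receive only matching notifications.
sub.setsockopt(zmq.SUBSCRIBE, b"notifications.info")

while True:
    topic, body = sub.recv_multipart()
    print(topic, body)
```

A publisher would pair this with send_multipart([b"notifications.info", payload]), putting the topic in the first frame so the filter can match it.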
New driver: what and why?
The main reason to write a new driver instead of rewriting the existing
one is a different underlying concept, not just refactoring
The old driver was designed as a universal socket pipeline
(PUSH/PULL forward and PUB/SUB backwards) on which all
messaging patterns were built
But such an approach is not an effective one, especially with ZeroMQ
New driver patterns usage
In the new driver we would
like to map ZeroMQ patterns
to oslo.messaging ones
directly, not reinventing them or
building them from more
primitive ones
Note that the
socket pipelines do not
depend on each other and
may run in separate
threads or even processes
We are going to have a different
proxy for each socket pipeline:
■ REQ/REP for CALL, where we need to wait for a reply. It may
also be used for direct CAST, but not necessarily.
■ PUB/SUB for CAST+FANOUT and the Notifier
■ We could also implement direct CAST with PUSH/PULL, why
not? (sketched below)
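A PUSH/PULL direct CAST would be a one-way pipe, roughly as in this pyzmq sketch (the IPC endpoint is hypothetical):

```python
import zmq

ctx = zmq.Context()

# Server side: a PULL socket receives direct casts for this host.
pull = ctx.socket(zmq.PULL)
pull.bind("ipc:///tmp/cast-demo")  # hypothetical endpoint

# Client side: PUSH delivers the cast; there is no reply path at all.
push = ctx.socket(zmq.PUSH)
push.connect("ipc:///tmp/cast-demo")
push.send(b"cast-payload")

print(pull.recv())  # b'cast-payload'
```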
What is even more important to have
Advanced diagnostics and testing
● Unit/Functional/Integration tests
● Logging
○ Inline
○ Centralized (using the notifier), per node or on a specific node
● An interactive console client to check a node’s state
Detailed documentation with graphics (not only
text)
Thanks for your attention!
