3. Zmq Proxy
● Needed so that the ZeroMQ driver uses only a single assigned TCP port per node
● Runs on each node
● Redirects traffic from the TCP socket to local IPC sockets
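A minimal sketch of the proxy idea, assuming pyzmq is available. The real proxy would bind its frontend to a tcp:// endpoint and its backend to ipc:// endpoints; the inproc:// addresses below are hypothetical stand-ins so the demo runs in a single process.

```python
import threading

import zmq

ctx = zmq.Context()

frontend = ctx.socket(zmq.ROUTER)   # faces remote clients (tcp:// in reality)
frontend.bind("inproc://frontend")
backend = ctx.socket(zmq.DEALER)    # faces local servers (ipc:// in reality)
backend.bind("inproc://backend")

def run_proxy():
    try:
        zmq.proxy(frontend, backend)   # forwards traffic in both directions
    except zmq.ContextTerminated:      # raised when ctx.term() is called
        frontend.close()
        backend.close()

threading.Thread(target=run_proxy, daemon=True).start()

# A local "server" sitting behind the proxy.
rep = ctx.socket(zmq.REP)
rep.connect("inproc://backend")

def serve_once():
    rep.send_string("reply:" + rep.recv_string())
    rep.close()

threading.Thread(target=serve_once, daemon=True).start()

# A "remote" client entering through the proxy's single frontend port.
req = ctx.socket(zmq.REQ)
req.connect("inproc://frontend")
req.send_string("ping")
reply = req.recv_string()
print(reply)  # reply:ping
req.close()
ctx.term()
```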
4. Client-ZmqProxy-Server
In terms of the Zmq driver, each service is an RPC client (which talks), an RPC server (which listens and replies), or both.
Each node consists of a Zmq proxy and an unlimited number of clients and servers.
5. CALL path (old driver)
Here is the typical path of a CALL message with its reply.
The blue path also represents a direct CAST (no reply, no fanout).
6. CALL path localhost (old driver)
The same for localhost.
The difference: there is only a single proxy here.
7. CALL path (old driver)
How many sockets do we open for one CALL?
We open 6 sockets per call (2 sockets stay on the proxies).
3 CALLs will take 2 + 18 = 20 sockets; 10 CALLs will take 2 + 60 = 62 sockets.
A socket cache would fix the growth problem, but we can also allocate fewer sockets in the first place.
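The slide's arithmetic can be written down as two hypothetical helper functions: 2 persistent proxy sockets plus 6 sockets per CALL for the old pipeline, versus 2 per CALL for the REQ/REP pipeline proposed later.

```python
def old_driver_sockets(calls):
    # 2 persistent proxy sockets + 6 sockets per CALL (old pipeline)
    return 2 + 6 * calls

def new_driver_sockets(calls):
    # 2 persistent proxy sockets + 2 sockets (REQ + REP) per CALL
    return 2 + 2 * calls

print(old_driver_sockets(3))    # 20
print(old_driver_sockets(10))   # 62
print(new_driver_sockets(10))   # 22
```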
8. CALL path (new driver)
What was proposed to change: simplify the CALL socket pipeline with the REQ/REP pattern.
Here we have only 2 sockets (REQ + REP) per call, plus 2 on the proxy.
10 CALLs = 2 + 20 = 22 sockets, compared to 62 on the previous slide.
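The REQ/REP pair maps naturally onto a blocking CALL: the client sends and then waits on the same socket for the reply. A minimal sketch, assuming pyzmq and using a hypothetical inproc:// endpoint (the driver would go through the proxy over tcp://):

```python
import threading

import zmq

ctx = zmq.Context()

rep = ctx.socket(zmq.REP)
rep.bind("inproc://call")   # with inproc, bind must happen before connect

def server():
    body = rep.recv_string()          # REP enforces recv -> send ordering
    rep.send_string("done:" + body)
    rep.close()

threading.Thread(target=server, daemon=True).start()

req = ctx.socket(zmq.REQ)
req.connect("inproc://call")
req.send_string("call-1")
reply = req.recv_string()             # CALL blocks here until the reply
print(reply)  # done:call-1
req.close()
ctx.term()
```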
9. CAST (new driver)
We use DEALER instead of REQ for CAST to stay asynchronous (not waiting for a reply).
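The point of DEALER here is that `send` returns immediately and the socket never insists on a reply, unlike REQ's strict send/recv lockstep. A sketch assuming pyzmq, with a ROUTER as a hypothetical stand-in for the receiving side:

```python
import zmq

ctx = zmq.Context()

router = ctx.socket(zmq.ROUTER)     # stand-in for the server side
router.bind("inproc://cast")

dealer = ctx.socket(zmq.DEALER)
dealer.connect("inproc://cast")
dealer.send_string("notify")        # returns at once; no reply is awaited

ident, body = router.recv_multipart()  # ROUTER prepends the sender identity
received = body.decode()
print(received)  # notify
dealer.close()
router.close()
ctx.term()
```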
10. Fanout (old driver)
Fanout is a broadcast over all available nodes.
It becomes tricky to detect which nodes are listening to a specific topic.
To resolve this we have a so-called “Matchmaker” object, which returns a list of hosts for a given topic on the client side.
11. Matchmaker redis (old driver)
One good matchmaker implementation uses Redis storage, which is synchronized on its own.
A server that starts listening on a topic puts the topic and its host IP into Redis, and this info becomes available on all nodes.
When a client starts a fanout cast, it receives the list of hosts from the matchmaker.
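The matchmaker contract can be sketched as two operations, register and lookup. This toy version uses a plain dict as an in-memory stand-in for Redis (a real implementation would use shared Redis storage, e.g. set commands such as SADD/SMEMBERS); the class and method names are hypothetical.

```python
class Matchmaker:
    """Toy topic -> hosts registry; a dict stands in for Redis."""

    def __init__(self):
        self._topics = {}

    def register(self, topic, host):
        # Called by a server when it starts listening on a topic.
        self._topics.setdefault(topic, set()).add(host)

    def get_hosts(self, topic):
        # Called by a client before a fanout cast.
        return sorted(self._topics.get(topic, set()))

mm = Matchmaker()
mm.register("scheduler", "10.0.0.1")
mm.register("scheduler", "10.0.0.2")
hosts = mm.get_hosts("scheduler")
print(hosts)  # ['10.0.0.1', '10.0.0.2']
```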
12. Fanout (new driver)
The Redis-based solution is also applicable in the new driver with REQ/REP and ROUTER/DEALER.
But we also need to research PUB/SUB with a topic subscription filter for the fanout implementation.
It involves XSUB/XPUB on the proxy, and one additional port (9502) is allocated for the zmq driver, because 9501 is already in use by REQ/REP.
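The subscription-filter mechanics can be shown with an XPUB socket, which (unlike plain PUB) lets the publishing side observe subscriptions as messages; this is exactly what an XSUB/XPUB proxy relies on to forward subscriptions upstream. A sketch assuming pyzmq, with a hypothetical inproc:// endpoint:

```python
import zmq

ctx = zmq.Context()

xpub = ctx.socket(zmq.XPUB)
xpub.bind("inproc://fanout")

sub = ctx.socket(zmq.SUB)
sub.connect("inproc://fanout")
sub.setsockopt(zmq.SUBSCRIBE, b"topic.a")   # prefix filter

# The subscription arrives at the XPUB side as \x01 + prefix.
subscription = xpub.recv()

xpub.send(b"topic.a hello")   # matches the prefix: delivered
xpub.send(b"topic.b hello")   # does not match: filtered out

received = sub.recv().decode()
print(received)  # topic.a hello
sub.close()
xpub.close()
ctx.term()
```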
13. Notifier (new driver)
Ceilometer is a service that relies on this pattern.
The best implementation of the notifier is PUB/SUB.
With the notifier it becomes easy to collect logging from all over the cloud onto a listener node.
14. New driver what and why?
The main reason to write a new driver instead of rewriting the existing one is a different underlying concept, not just refactoring.
The old driver was designed as a universal socket pipeline (PUSH/PULL forward and PUB/SUB backwards) on top of which all messaging patterns were built.
But such an approach is not an effective one, especially with ZeroMQ.
15. New driver patterns usage
In the new driver we would like to map ZeroMQ patterns to oslo.messaging patterns directly, rather than reinventing them or building them from more primitive ones.
Please pay attention that the socket pipelines do not depend on each other and may run in separate threads or even processes.
We are going to have a different proxy for each socket pipeline.
■ REQ/REP for CALL, where we need to wait for a reply. It may also be used for direct CAST, but not necessarily.
■ PUB/SUB for CAST+FANOUT and the Notifier
■ We can also implement direct CAST with PUSH/PULL, so why not?
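The PUSH/PULL option from the last bullet is the simplest pipeline of the three: one send-only socket, one receive-only socket, no reply path. A sketch assuming pyzmq, again with a hypothetical inproc:// endpoint:

```python
import zmq

ctx = zmq.Context()

pull = ctx.socket(zmq.PULL)   # server side of a direct CAST
pull.bind("inproc://direct")

push = ctx.socket(zmq.PUSH)   # client side; send-only, no reply path exists
push.connect("inproc://direct")
push.send_string("cast-payload")

received = pull.recv_string()
print(received)  # cast-payload
push.close()
pull.close()
ctx.term()
```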
16. What is even more important to have
Advanced diagnostics and testing:
● Unit/Functional/Integration tests
● Logging
○ Inline
○ Centralized (using the notifier), per node or on a specific node
● Interactive console client to check a node’s state
Detailed documentation with graphics (not only text)