TechCon 2021
Connecting MQ and
Kafka together
Matt Leming
Architect, MQ for z/OS
lemingma@uk.ibm.com
© 2021 IBM Corporation
Messaging is essential for building fully connected, efficient and scalable solutions. More now than ever
before
Messaging patterns

Critical exchange of information from one system to another: messages must get through, no questions asked. The system must be secure and distributed.

Real-time event notification for microservice scalability: must be frictionless, scalable and lightweight. The system must be simple to exploit, invisible to the application.

Event streaming for data caching and processing: must maintain a projection of the data for efficient repeatable processing. The system must scale for large data volumes.

TechCon 2021 © 2021 IBM Corporation
Architectural patterns

IBM MQ: messages and events for communication and notification. Systems rely on messages and events to communicate between services in real time, not just within single applications, but across organisations and between them.

Kafka: events for data persistence. Events represent past state changes; retaining a record enables replay and blurs the line between the role of messaging and data storage.
Connecting with Kafka
Connecting MQ and Kafka: why?
With IBM MQ and Apache Kafka specialising in
different aspects of the messaging spectrum, one on
connectivity and the other on data, solutions often
require messages to flow between the two
Common scenarios:
Core banking system with MQ used as connectivity
backbone. Customer wants to take a copy of
messages from MQ and push them into Kafka for
analytics
Customer wants to extend core banking system to
emit data into Kafka, but doesn’t want to add
network latency that might affect SLAs, so uses
local queue manager as a bridge
Customer needs to get data into Kafka from z/OS
but doesn’t want to invest in Kafka skills so uses in-
house MQ experience
Customer needs to get data into z/OS from
distributed. Distributed development team has
experience with Kafka, z/OS team want to exploit
MQ integration with CICS / IMS
Kafka Connect
As Kafka is commonly used for analytics there is a
need to be able to get data into it from a wide range of
sources
Kafka Connect provides a framework for this which
makes it easy to build connectors
Source connectors: data from external system into
Kafka
Sink connectors: data into external system from Kafka
Over 120 connectors exist today: IBM MQ, JDBC,
ElasticSearch, Splunk…
Some supported, some not
[Diagram: Kafka Connect workers sit alongside the Kafka brokers; a source connector copies data from an external system into a Kafka topic, and a sink connector copies data from a Kafka topic out to an external system]
NB: a connector is actually divided into a connector, which deals with configuration and configuration changes, and a set of tasks, which do the copying of the data. In this presentation I will just use "connector" to refer to both
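In distributed mode, a connector is typically submitted to a Kafka Connect worker's REST API (POST /connectors, port 8083 by default) as a JSON document. A minimal sketch using the FileStreamSource example connector that ships with Kafka; the connector name, file path and topic here are illustrative:

```json
{
  "name": "file-source-example",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/source.txt",
    "topic": "example-topic"
  }
}
```

In standalone mode the same settings are passed as a properties file on the worker's command line instead.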
IBM MQ - Kafka Connector
Several connectors exist for connecting MQ queues
to Kafka topics
Both source and sink connectors exist; the source connector is by far the more commonly used today
IBM provides functionally identical connectors in two
forms:
Unsupported open source
Supported with CP4I
Confluent also provides a supported connector
[Diagram: the MQ source connector copies messages from queue TO.KAFKA to Kafka topic FROM.MQ; the MQ sink connector copies messages from Kafka topic TO.MQ to queue FROM.KAFKA]
https://github.com/ibm-messaging/kafka-connect-mq-source
https://github.com/ibm-messaging/kafka-connect-mq-sink
Application options
The IBM source / sink connectors both require a
queue name
With the source connector there are three different
approaches that can be used to get messages onto
the queue to be consumed by the connector…
[Diagram: three ways of getting messages onto the connector's TO.KAFKA queue: an application putting direct to the queue, a subscription to a topic, and a streaming queue copy]
Direct to queue
New applications, or applications that are being
changed, can just put the relevant data to the queue
used by the connector
Tends to be used when you don't want to generate a copy of existing messages
Often used on z/OS to avoid the network latency associated with a direct connection to Kafka via REST, or more complicated approaches like exploiting the Kafka Java API in CICS
Subscribe to topic
If messages are already being published to a topic, it's trivial to generate another copy of them by using an administrative subscription pointing to the queue used by the connector
This approach is transparent to existing applications
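As an MQSC sketch, assuming the messages are published on topic string APP/EVENTS (the subscription and topic-string names here are illustrative), one administrative subscription routes a copy to the connector's queue:

```
DEFINE SUB('KAFKA.COPY.SUB') TOPICSTR('APP/EVENTS') DEST('TO.KAFKA')
```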
Subscribe to topic
This approach can also be used to take a copy of messages that are being sent to a queue
The queue is changed into a queue alias pointing to a topic. Two administrative subscriptions are created: one for the application that used to consume from the queue, the other for the connector
This approach might not be viable in all cases, depending on whether queue aliases are already in use, or whether the application can be adjusted to use them
It also relies on existing applications being able to tolerate the changes to message ID, correlation ID, etc. that come with pub/sub
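A hedged MQSC sketch of the alias approach (all object and topic-string names are illustrative): the producer still opens the alias, the messages flow through a topic, and two subscriptions feed the original consumer and the connector respectively:

```
DEFINE TOPIC(APP.EVENTS) TOPICSTR('APP/EVENTS')
DEFINE QALIAS(ALIAS.TO.APP) TARGET(APP.EVENTS) TARGTYPE(TOPIC)
DEFINE SUB('APP.SUB')   TOPICSTR('APP/EVENTS') DEST('TO.APP')
DEFINE SUB('KAFKA.SUB') TOPICSTR('APP/EVENTS') DEST('TO.KAFKA')
```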
Streaming queue copy
From 9.2.3 on distributed platforms, there is an alternative option: streaming queues
Streaming queues allow messages being put to one queue to be copied to a second queue without affecting the applications using the first queue
For example:
DEF QL(TO.APP) STREAMQ(TO.KAFKA)
STRMQOS(MUSTDUP)
DEF QL(TO.KAFKA)
This says: when a message is sent to TO.APP, a copy of that message must be sent to TO.KAFKA
Enabling streaming queues has no effect on existing applications as the original message doesn't change; the message sent to the second queue is identical (*) to the original message: same payload, message and correlation ID, etc
The source connector in detail
The connector is Java based and so uses the MQ
JMS API to interact with MQ
Provided in the form of a jar file. If using the open
source connector, you build this yourself, otherwise
you download it from CP4I
The connector is installed into Kafka Connect and run
with a properties file containing configuration
Lots of flexibility in configuration:
Client / bindings mode
TLS including mutual auth
User id and password
JMS and MQ format messages
Message properties
Client mode connections are the default
[Diagram: the MQ source connector, running in a Kafka Connect worker, connects to queue TO.KAFKA and publishes to Kafka topic FROM.MQ]
Example properties:

# The name of the target Kafka topic
topic=FROM.MQ
# The name of the MQ queue manager
mq.queue.manager=MQ1
mq.connection.mode=bindings
# The name of the source MQ queue
mq.queue=TO.KAFKA
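The example above uses bindings mode. As a hedged sketch of a client-mode equivalent (the host, channel, credentials and record-builder class below are illustrative values for the connector's documented properties, not a tested configuration):

```properties
topic=FROM.MQ
mq.queue.manager=MQ1
mq.connection.mode=client
mq.connection.name.list=mqhost.example.com(1414)
mq.channel.name=KAFKA.SVRCONN
mq.queue=TO.KAFKA
mq.user.name=kafkauser
mq.password=passw0rd
# Optional: parse JSON payloads instead of using the default record builder
mq.record.builder=com.ibm.eventstreams.connect.mqsource.builders.JsonRecordBuilder
```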
Data conversion
Messages read in from MQ go through a number of
conversions before being sent to Kafka
If the connector receives a message it can't convert, it will stop rather than throw the message away, so understanding the data format is important!
When a message is read from MQ, its payload is converted to an internal format known as a source record
This conversion is done via a record builder. Two record
builders are provided with the connector (default, JSON) or
you can write your own
The source record is then converted into a Kafka message
This conversion is done via a converter provided by Kafka
(byte array, string, JSON)
The documentation for the connector provides
recommendations for common use cases
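The two-stage conversion can be illustrated with a minimal Python sketch. This is not the connector's actual Java implementation; the record and builder shapes are simplified stand-ins for Kafka Connect's SourceRecord, the JSON record builder, and the JsonConverter:

```python
import json
from typing import Any, NamedTuple, Optional

class SourceRecord(NamedTuple):
    """Simplified stand-in for Kafka Connect's internal source record."""
    schema: Optional[dict]
    value: Any

def json_record_builder(payload: bytes) -> SourceRecord:
    """Like the connector's JSON record builder: parse the MQ message payload
    as JSON. A payload that isn't valid JSON raises an error, which is what
    makes the connector stop rather than discard the message."""
    return SourceRecord(schema=None, value=json.loads(payload))

def json_converter(record: SourceRecord) -> bytes:
    """Like Kafka's JsonConverter: serialise the source record's value into
    the bytes that become the Kafka message value."""
    return json.dumps(record.value).encode("utf-8")

mq_payload = b'{"account": "12345", "amount": 9.99}'
kafka_value = json_converter(json_record_builder(mq_payload))
assert json.loads(kafka_value) == {"account": "12345", "amount": 9.99}
```

A non-JSON payload fed to the builder raises rather than being silently dropped, mirroring the stop-on-unconvertible behaviour described above.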
[Diagram: an MQ message (MQMD, optional MQRFH2, payload) becomes a source record (schema, value) via the record builder, then a Kafka message (key, value) via the converter]
Partitioning
When messages are published to a Kafka topic they
need to be spread across the different partitions of the
topic
This is either done by a round-robin process, or if the
message contains a key, the hash of the key is used
to select a partition (same key => same partition)
It's also possible to write your own partitioning implementation
By default the MQ source connector doesn't specify a key
However it can be configured to use the MQ message ID, correlation ID, or the queue name as the key
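The partition-selection behaviour can be sketched in a few lines of Python. Note the hedge: Kafka's default partitioner actually hashes keys with murmur2; md5 is used here purely to illustrate the "same key, same partition" property:

```python
import hashlib
from typing import List, Optional

NUM_PARTITIONS = 3  # partitions of the target topic

def choose_partition(key: Optional[bytes], counter: List[int]) -> int:
    """Sketch of Kafka's default partitioning: keyless messages are spread
    round-robin; keyed messages are assigned by hashing the key, so the same
    key always lands on the same partition. (Kafka really uses murmur2;
    md5 here just illustrates the idea.)"""
    if key is None:
        counter[0] += 1                      # round-robin for keyless messages
        return counter[0] % NUM_PARTITIONS
    digest = hashlib.md5(key).digest()       # hash of the key picks the partition
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

counter = [0]
# Using, say, an MQ message ID as the key: both copies land on the same partition
assert choose_partition(b"414D51204D5131", counter) == choose_partition(b"414D51204D5131", counter)
```

This is why configuring the connector to use the correlation ID as the key, for example, keeps related messages on one partition and so preserves their relative order.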
[Diagram: the key of a Kafka message determines which of the topic's partitions, spread across the brokers, it is written to]
Fault tolerance and scalability
Both MQ and Kafka are highly fault tolerant and
scalable. This extends to Kafka Connect and the MQ
connectors
Kafka Connect can be run in two modes:
Standalone: a single Kafka Connect worker process runs the connector. This is useful for getting started with the connectors, or if you want guaranteed message ordering, but it is a single point of failure and can be a scalability bottleneck
Distributed: multiple Kafka Connect worker
processes run across a set of machines and form a
Kafka Connect cluster. Connectors are run across a
number of these workers depending on configuration.
If a worker process running a connector fails, the
connector can be restarted on a different worker in the
cluster. The workers collaborate to ensure they each
have about the same amount of work
[Diagram: a cluster of Kafka Connect workers sits between the queue managers and the Kafka brokers]
Fault tolerance and scalability
Messages are received from MQ in batches using a JMS transacted session, i.e. using a transaction coordinated by the queue manager
If there is any failure in converting any MQ messages
then the transaction is rolled back and the messages
can be reprocessed later
The connector automatically deals with reconnections
to MQ if needed
Similarly the Kafka Connect framework automatically
deals with reconnections to Kafka if needed,
depending on configuration
Kafka doesn't have the ability to take part in two-phase commit transactions, so some failure scenarios might end up with MQ messages being written to the Kafka topic multiple times
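The at-least-once behaviour falls out of the transactional loop, which a minimal Python sketch can illustrate. This is not the connector's actual Java code; the queue and the commit are simulated to show why a crash between the Kafka produce and the MQ commit causes duplicates:

```python
from typing import Callable, List

def poll_batch(queue: List[bytes], convert: Callable[[bytes], str],
               produce: Callable[[str], None], fail_before_commit: bool = False) -> None:
    """Sketch of the source connector's loop: take a batch under a (simulated)
    MQ transaction, convert and produce each message to Kafka, then commit.
    Any failure rolls the whole batch back onto the queue for a later retry."""
    batch = list(queue)                 # MQGET under syncpoint: not yet removed
    try:
        for payload in batch:
            produce(convert(payload))   # the Kafka produce is NOT transactional
        if fail_before_commit:
            raise RuntimeError("crash after producing, before the MQ commit")
    except Exception:
        return                          # rollback: queue left untouched
    del queue[:len(batch)]              # commit: batch removed from the queue

sent: List[str] = []
queue = [b"m1", b"m2"]
# A crash between the Kafka produce and the MQ commit leaves the batch on the queue...
poll_batch(queue, bytes.decode, sent.append, fail_before_commit=True)
assert queue == [b"m1", b"m2"] and sent == ["m1", "m2"]
# ...so the retry delivers it to Kafka a second time: at-least-once, duplicates possible
poll_batch(queue, bytes.decode, sent.append)
assert queue == [] and sent == ["m1", "m2", "m1", "m2"]
```

A conversion failure takes the same rollback path, which is why a bad message stops progress rather than being lost.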
Using the connector on z/OS
Lots of customers on z/OS have MQ and use it to
communicate between z/OS LPARs as well as
between z/OS and distributed
There has been a lot of interest in the MQ connector
on z/OS
It’s possible to run Kafka Connect on distributed and
connect to z/OS as a client
This is the same model you would likely use with
distributed MQ
However …
[Diagram: Kafka Connect and the MQ source connector running off-platform, connecting to a queue manager on a z/OS LPAR as a client and publishing to topic FROM.MQ on the Kafka brokers]
Using the connector on z/OS
An alternative is to run the connector on z/OS in USS
and connect to the queue manager in bindings
The connector then connects to the remote Kafka
cluster over the network. Connections to Kafka are
always network based
This model uses less MQ CPU, and because Kafka Connect is Java based it is zIIP eligible, making costs very competitive
Kafka Connect works fine on z/OS. However various
properties files and shell scripts need to be converted
to EBCDIC first
The conversion is documented here: https://ibm.github.io/event-streams/connecting/mq/zos/
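The conversion itself can be done with iconv. A minimal sketch, assuming file names are illustrative and using IBM-1047 as the EBCDIC codeset (the documentation linked above gives the exact steps for your environment):

```shell
# Sketch: Kafka Connect's .properties files and shell scripts ship in ASCII;
# under USS they need to be EBCDIC. Create a sample ASCII properties file...
printf 'topic=FROM.MQ\nmq.queue=TO.KAFKA\n' > connect.properties
# ...convert it to EBCDIC for use in USS...
iconv -f ISO8859-1 -t IBM-1047 connect.properties > connect.properties.ebcdic
# ...and converting back recovers the original, showing the round trip is lossless
iconv -f IBM-1047 -t ISO8859-1 connect.properties.ebcdic
```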
[Diagram: Kafka Connect and the MQ source connector running in USS on the z/OS LPAR, connecting to the queue manager in bindings mode and to the remote Kafka brokers over the network]
Connector location | Connection to MQ           | Total CPU us | MQ CPU us | Connector CPU us
USS                | Bindings                   | 100.6 (4.9)  | 2.4       | 98.2 (2.5)
USS                | Client (Advanced VUE only) | 152.9 (60.9) | 58.4      | 94.5 (2.5)
Distributed        | Client                     | 55.7         | 55.7      | N/A

Values in brackets are if maximum zIIP offload is achieved
https://www.imwuc.org/HigherLogic/System/DownloadDocumentFile.ashx?DocumentFileKey=179ebb75-7be2-42aa-6f0e-818aeef805f2
DEMO
[Demo setup: MQ applications put to TO.APP, defined with STREAMQ(TO.KAFKA), so copies arrive on TO.KAFKA for the connector]
Starting with the following on Linux:
MQ 9.2.3 installed
Latest Kafka installed
MQ source connector built and installed
Start Zookeeper
Start single Kafka broker
Define MQ queues
Send a couple of messages
Check they are there
Define the Kafka topic
Check connector is installed and configured
Start a Kafka consumer, nothing there…
Start the connector
Messages have now arrived
Send some more…
Thank you
© Copyright IBM Corporation 2021. All rights reserved. The information contained in these materials is provided for informational purposes only, and is provided AS IS without warranty of any kind,
express or implied. Any statement of direction represents IBM’s current intent, is subject to change or withdrawal, and represent only goals and objectives. IBM, the IBM logo, and ibm.com are trademarks
of IBM Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available at Copyright and
trademark information.
Matt Leming
Architect, MQ for z/OS
lemingma@uk.ibm.com

More Related Content

What's hot

What's hot (20)

Whats new in MQ V9.1
Whats new in MQ V9.1Whats new in MQ V9.1
Whats new in MQ V9.1
 
Introducing github.com/open-cluster-management – How to deliver apps across c...
Introducing github.com/open-cluster-management – How to deliver apps across c...Introducing github.com/open-cluster-management – How to deliver apps across c...
Introducing github.com/open-cluster-management – How to deliver apps across c...
 
IBM MQ - What's new in 9.2
IBM MQ - What's new in 9.2IBM MQ - What's new in 9.2
IBM MQ - What's new in 9.2
 
Kafka Overview
Kafka OverviewKafka Overview
Kafka Overview
 
Building DataCenter networks with VXLAN BGP-EVPN
Building DataCenter networks with VXLAN BGP-EVPNBuilding DataCenter networks with VXLAN BGP-EVPN
Building DataCenter networks with VXLAN BGP-EVPN
 
IBM MQ Clustering (2017 version)
IBM MQ Clustering (2017 version)IBM MQ Clustering (2017 version)
IBM MQ Clustering (2017 version)
 
IBM MQ Online Tutorials
IBM MQ Online TutorialsIBM MQ Online Tutorials
IBM MQ Online Tutorials
 
Websphere MQ admin guide
Websphere MQ admin guideWebsphere MQ admin guide
Websphere MQ admin guide
 
Kafka Tutorial - DevOps, Admin and Ops
Kafka Tutorial - DevOps, Admin and OpsKafka Tutorial - DevOps, Admin and Ops
Kafka Tutorial - DevOps, Admin and Ops
 
Docker Kubernetes Istio
Docker Kubernetes IstioDocker Kubernetes Istio
Docker Kubernetes Istio
 
Service Mesh - Why? How? What?
Service Mesh - Why? How? What?Service Mesh - Why? How? What?
Service Mesh - Why? How? What?
 
IBM MQ Update, including 9.1.2 CD
IBM MQ Update, including 9.1.2 CDIBM MQ Update, including 9.1.2 CD
IBM MQ Update, including 9.1.2 CD
 
Deploying and managing IBM MQ in the Cloud
Deploying and managing IBM MQ in the CloudDeploying and managing IBM MQ in the Cloud
Deploying and managing IBM MQ in the Cloud
 
Apache Kafka - Messaging System Overview
Apache Kafka - Messaging System OverviewApache Kafka - Messaging System Overview
Apache Kafka - Messaging System Overview
 
Cisco Identity Services Engine (ISE)
Cisco Identity Services Engine (ISE)Cisco Identity Services Engine (ISE)
Cisco Identity Services Engine (ISE)
 
Introduction to Anypoint Runtime Fabric on Amazon Elastic Kubernetes Service ...
Introduction to Anypoint Runtime Fabric on Amazon Elastic Kubernetes Service ...Introduction to Anypoint Runtime Fabric on Amazon Elastic Kubernetes Service ...
Introduction to Anypoint Runtime Fabric on Amazon Elastic Kubernetes Service ...
 
Cisco nexus 7009 overview
Cisco nexus 7009 overviewCisco nexus 7009 overview
Cisco nexus 7009 overview
 
IBM MQ: An Introduction to Using and Developing with MQ Publish/Subscribe
IBM MQ: An Introduction to Using and Developing with MQ Publish/SubscribeIBM MQ: An Introduction to Using and Developing with MQ Publish/Subscribe
IBM MQ: An Introduction to Using and Developing with MQ Publish/Subscribe
 
Designing IBM MQ deployments for the cloud generation
Designing IBM MQ deployments for the cloud generationDesigning IBM MQ deployments for the cloud generation
Designing IBM MQ deployments for the cloud generation
 
Mq presentation
Mq presentationMq presentation
Mq presentation
 

Similar to Connecting mq&kafka

Cluster_Performance_Apache_Kafak_vs_RabbitMQ
Cluster_Performance_Apache_Kafak_vs_RabbitMQCluster_Performance_Apache_Kafak_vs_RabbitMQ
Cluster_Performance_Apache_Kafak_vs_RabbitMQ
Shameera Rathnayaka
 

Similar to Connecting mq&kafka (20)

What's New In MQ 9.2 on z/OS
What's New In MQ 9.2 on z/OSWhat's New In MQ 9.2 on z/OS
What's New In MQ 9.2 on z/OS
 
What's new with MQ on z/OS 9.3 and 9.3.1
What's new with MQ on z/OS 9.3 and 9.3.1What's new with MQ on z/OS 9.3 and 9.3.1
What's new with MQ on z/OS 9.3 and 9.3.1
 
Hello, kafka! (an introduction to apache kafka)
Hello, kafka! (an introduction to apache kafka)Hello, kafka! (an introduction to apache kafka)
Hello, kafka! (an introduction to apache kafka)
 
What's new in IBM MQ, March 2018
What's new in IBM MQ, March 2018What's new in IBM MQ, March 2018
What's new in IBM MQ, March 2018
 
Introduction to Kafka
Introduction to KafkaIntroduction to Kafka
Introduction to Kafka
 
Lessons Learned Building a Connector Using Kafka Connect (Katherine Stanley &...
Lessons Learned Building a Connector Using Kafka Connect (Katherine Stanley &...Lessons Learned Building a Connector Using Kafka Connect (Katherine Stanley &...
Lessons Learned Building a Connector Using Kafka Connect (Katherine Stanley &...
 
What's new in MQ 9.1.* on z/OS
What's new in MQ 9.1.* on z/OSWhat's new in MQ 9.1.* on z/OS
What's new in MQ 9.1.* on z/OS
 
Apache kafka
Apache kafkaApache kafka
Apache kafka
 
SA UNIT II KAFKA.pdf
SA UNIT II KAFKA.pdfSA UNIT II KAFKA.pdf
SA UNIT II KAFKA.pdf
 
Apache kafka
Apache kafkaApache kafka
Apache kafka
 
IBM MQ for z/OS The Latest and Greatest Enhancements
IBM MQ for z/OS The Latest and Greatest EnhancementsIBM MQ for z/OS The Latest and Greatest Enhancements
IBM MQ for z/OS The Latest and Greatest Enhancements
 
CLUSTER COMPUTING
CLUSTER COMPUTINGCLUSTER COMPUTING
CLUSTER COMPUTING
 
MQ Support for z/OS Connect
MQ Support for z/OS ConnectMQ Support for z/OS Connect
MQ Support for z/OS Connect
 
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...
 
Cluster_Performance_Apache_Kafak_vs_RabbitMQ
Cluster_Performance_Apache_Kafak_vs_RabbitMQCluster_Performance_Apache_Kafak_vs_RabbitMQ
Cluster_Performance_Apache_Kafak_vs_RabbitMQ
 
IBM MQ Whats new - up to 9.3.4.pdf
IBM MQ Whats new - up to 9.3.4.pdfIBM MQ Whats new - up to 9.3.4.pdf
IBM MQ Whats new - up to 9.3.4.pdf
 
Lessons Learned Building a Connector Using Kafka Connect (Katherine Stanley &...
Lessons Learned Building a Connector Using Kafka Connect (Katherine Stanley &...Lessons Learned Building a Connector Using Kafka Connect (Katherine Stanley &...
Lessons Learned Building a Connector Using Kafka Connect (Katherine Stanley &...
 
Confluent & Attunity: Mainframe Data Modern Analytics
Confluent & Attunity: Mainframe Data Modern AnalyticsConfluent & Attunity: Mainframe Data Modern Analytics
Confluent & Attunity: Mainframe Data Modern Analytics
 
The enterprise differentiator of mq on zos
The enterprise differentiator of mq on zosThe enterprise differentiator of mq on zos
The enterprise differentiator of mq on zos
 
Kubernetes connectivity to Cloud Native Kafka | Evan Shortiss and Hugo Guerre...
Kubernetes connectivity to Cloud Native Kafka | Evan Shortiss and Hugo Guerre...Kubernetes connectivity to Cloud Native Kafka | Evan Shortiss and Hugo Guerre...
Kubernetes connectivity to Cloud Native Kafka | Evan Shortiss and Hugo Guerre...
 

More from Matt Leming

More from Matt Leming (11)

533-MigratingYourMQIApplicationsToJMS.pdf
533-MigratingYourMQIApplicationsToJMS.pdf533-MigratingYourMQIApplicationsToJMS.pdf
533-MigratingYourMQIApplicationsToJMS.pdf
 
IBM MQ Whats new - up to 9.3.4.pptx
IBM MQ Whats new - up to 9.3.4.pptxIBM MQ Whats new - up to 9.3.4.pptx
IBM MQ Whats new - up to 9.3.4.pptx
 
Going Deep with MQ
Going Deep with MQGoing Deep with MQ
Going Deep with MQ
 
Building a resilient and scalable solution with IBM MQ on z/OS
Building a resilient and scalable solution with IBM MQ on z/OSBuilding a resilient and scalable solution with IBM MQ on z/OS
Building a resilient and scalable solution with IBM MQ on z/OS
 
Where is my MQ message on z/OS?
Where is my MQ message on z/OS?Where is my MQ message on z/OS?
Where is my MQ message on z/OS?
 
What's new in MQ 9.1 on z/OS
What's new in MQ 9.1 on z/OSWhat's new in MQ 9.1 on z/OS
What's new in MQ 9.1 on z/OS
 
Where is My Message
Where is My MessageWhere is My Message
Where is My Message
 
New Tools and Interfaces for Managing IBM MQ
New Tools and Interfaces for Managing IBM MQNew Tools and Interfaces for Managing IBM MQ
New Tools and Interfaces for Managing IBM MQ
 
HHM-2833: Where is My Message?: Using IBM MQ Tools to Work Out What Applicati...
HHM-2833: Where is My Message?: Using IBM MQ Tools to Work Out What Applicati...HHM-2833: Where is My Message?: Using IBM MQ Tools to Work Out What Applicati...
HHM-2833: Where is My Message?: Using IBM MQ Tools to Work Out What Applicati...
 
HHM-3540: The IBM MQ Light API: From Developer Laptop to Enterprise Data Cen...
 HHM-3540: The IBM MQ Light API: From Developer Laptop to Enterprise Data Cen... HHM-3540: The IBM MQ Light API: From Developer Laptop to Enterprise Data Cen...
HHM-3540: The IBM MQ Light API: From Developer Laptop to Enterprise Data Cen...
 
HHM-3481: IBM MQ for z/OS: Enhancing Application and Messaging Connectivity ...
 HHM-3481: IBM MQ for z/OS: Enhancing Application and Messaging Connectivity ... HHM-3481: IBM MQ for z/OS: Enhancing Application and Messaging Connectivity ...
HHM-3481: IBM MQ for z/OS: Enhancing Application and Messaging Connectivity ...
 

Recently uploaded

Mastering Windows 7 A Comprehensive Guide for Power Users .pdf
Mastering Windows 7 A Comprehensive Guide for Power Users .pdfMastering Windows 7 A Comprehensive Guide for Power Users .pdf
Mastering Windows 7 A Comprehensive Guide for Power Users .pdf
mbmh111980
 

Recently uploaded (20)

Studiovity film pre-production and screenwriting software
Studiovity film pre-production and screenwriting softwareStudiovity film pre-production and screenwriting software
Studiovity film pre-production and screenwriting software
 
How to install and activate eGrabber JobGrabber
How to install and activate eGrabber JobGrabberHow to install and activate eGrabber JobGrabber
How to install and activate eGrabber JobGrabber
 
A Guideline to Zendesk to Re:amaze Data Migration
A Guideline to Zendesk to Re:amaze Data MigrationA Guideline to Zendesk to Re:amaze Data Migration
A Guideline to Zendesk to Re:amaze Data Migration
 
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?
 
Benefits of Employee Monitoring Software
Benefits of  Employee Monitoring SoftwareBenefits of  Employee Monitoring Software
Benefits of Employee Monitoring Software
 
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR
 
How To Build a Successful SaaS Design.pdf
How To Build a Successful SaaS Design.pdfHow To Build a Successful SaaS Design.pdf
How To Build a Successful SaaS Design.pdf
 
Into the Box 2024 - Keynote Day 2 Slides.pdf
Into the Box 2024 - Keynote Day 2 Slides.pdfInto the Box 2024 - Keynote Day 2 Slides.pdf
Into the Box 2024 - Keynote Day 2 Slides.pdf
 
Crafting the Perfect Measurement Sheet with PLM Integration
Crafting the Perfect Measurement Sheet with PLM IntegrationCrafting the Perfect Measurement Sheet with PLM Integration
Crafting the Perfect Measurement Sheet with PLM Integration
 
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital TransformationWSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
 
Facemoji Keyboard released its 2023 State of Emoji report, outlining the most...
Facemoji Keyboard released its 2023 State of Emoji report, outlining the most...Facemoji Keyboard released its 2023 State of Emoji report, outlining the most...
Facemoji Keyboard released its 2023 State of Emoji report, outlining the most...
 
Breaking the Code : A Guide to WhatsApp Business API.pdf
Breaking the Code : A Guide to WhatsApp Business API.pdfBreaking the Code : A Guide to WhatsApp Business API.pdf
Breaking the Code : A Guide to WhatsApp Business API.pdf
 
10 Essential Software Testing Tools You Need to Know About.pdf
10 Essential Software Testing Tools You Need to Know About.pdf10 Essential Software Testing Tools You Need to Know About.pdf
10 Essential Software Testing Tools You Need to Know About.pdf
 
Top Mobile App Development Companies 2024
Top Mobile App Development Companies 2024Top Mobile App Development Companies 2024
Top Mobile App Development Companies 2024
 
Mastering Windows 7 A Comprehensive Guide for Power Users .pdf
Mastering Windows 7 A Comprehensive Guide for Power Users .pdfMastering Windows 7 A Comprehensive Guide for Power Users .pdf
Mastering Windows 7 A Comprehensive Guide for Power Users .pdf
 
Accelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with PlatformlessAccelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with Platformless
 
Secure Software Ecosystem Teqnation 2024
Secure Software Ecosystem Teqnation 2024Secure Software Ecosystem Teqnation 2024
Secure Software Ecosystem Teqnation 2024
 
iGaming Platform & Lottery Solutions by Skilrock
iGaming Platform & Lottery Solutions by SkilrockiGaming Platform & Lottery Solutions by Skilrock
iGaming Platform & Lottery Solutions by Skilrock
 
A Guideline to Gorgias to to Re:amaze Data Migration
A Guideline to Gorgias to to Re:amaze Data MigrationA Guideline to Gorgias to to Re:amaze Data Migration
A Guideline to Gorgias to to Re:amaze Data Migration
 
Using IESVE for Room Loads Analysis - Australia & New Zealand
Using IESVE for Room Loads Analysis - Australia & New ZealandUsing IESVE for Room Loads Analysis - Australia & New Zealand
Using IESVE for Room Loads Analysis - Australia & New Zealand
 

Connecting mq&kafka

  • 1. TechCon 2021 Connecting MQ and Kafka together Matt Leming Architect, MQ for z/OS lemingma@uk.ibm.com
  • 2. © 2021 IBM Corporation Messaging is essential for building fully connected, efficient and scalable solutions. More now than ever before
  • 3. Messaging is essential for building fully connected, efficient and scalable solutions. More now than ever before Critical exchange of information from one system to another Messages must get through, no questions asked. The system must be secure and distributed Real-time event notification for microservice scalability Must be frictionless, scalable and lightweight. The system must be simple to exploit, invisible to the application TechCon 2021 © 2021 IBM Corporation Event streaming for data caching and processing Must maintain a projection of the data for efficient repeatable processing. The system must scale for large data volumes Messaging patterns
  • 4. Messaging is essential for building fully connected, efficient and scalable solutions. More now than ever before Critical exchange of information from one system to another Messages must get through, no questions asked. The system must be secure and distributed Real-time event notification for microservice scalability Must be frictionless, scalable and lightweight. The system must be simple to exploit, invisible to the application TechCon 2021 © 2021 IBM Corporation Event streaming for data caching and processing Must maintain a projection of the data for efficient repeatable processing. The system must scale for large data volumes Messaging patterns IBM MQ Kafka
  • 5. Messaging is essential for building fully connected, efficient and scalable solutions. More now than ever before Messages and events for communication and notification Systems rely on messages and events to communicate between services in real time, not just within single applications, but across organisations and between them Events for data persistence Events represent past state changes, retaining a record enables reply and blurs the line between the role of messaging and data storage TechCon 2021 © 2021 IBM Corporation Architectural patterns IBM MQ Kafka
  • 7. © 2021 IBM Corporation Connecting MQ and Kafka: why? With IBM MQ and Apache Kafka specialising in different aspects of the messaging spectrum, one on connectivity and the other on data, solutions often require messages to flow between the two IBM MQ Common scenarios: Core banking system with MQ used as connectivity backbone. Customer wants to take a copy of messages from MQ and push them into Kafka for analytics Customer wants to extend core banking system to emit data into Kafka, but doesn’t want to add network latency that might affect SLAs, so uses local queue manager as a bridge Customer needs to get data into Kafka from z/OS but doesn’t want to invest in Kafka skills so uses in- house MQ experience Customer needs to get data into z/OS from distributed. Distributed development team has experience with Kafka, z/OS team want to exploit MQ integration with CICS / IMS
  • 8. © 2021 IBM Corporation Kafka Connect As Kafka is commonly used for analytics there is a need to be able to get data into it from a wide range of sources Kafka Connect provides a framework for this which makes it easy to build connectors Source connectors: data from external system into Kafka Sink connectors: data into external system from Kafka Over 120 connectors exist today: IBM MQ, JDBC, ElasticSearch, Splunk… Some supported, some not External System Kafka Connect worker TOPIC Kafka Connect worker SINK CONNECTOR TOPIC SOURCE CONNECTOR Kafka brokers NB: a connector is actually divided up into a connector which deals with configuration, and configuration changes, and a set of tasks which do the copying of data. For this presentation I will just use connector to refer to both
  • 9. © 2021 IBM Corporation IBM MQ - Kafka Connector Several connectors exist for connecting MQ queues to Kafka topics Both source and sink connectors exist. The source one is by far the most commonly used one today IBM provides functionally identical connectors in two forms: Unsupported open source Supported with CP4I Confluent also provides a supported connector IBM MQ QUEUE: TO.KAFKA QUEUE: FROM.KAFKA Kafka Connect worker TOPIC: FROM.MQ Kafka Connect worker MQ SINK CONNECTOR TOPIC: TO.MQ MQ SOURCE CONNECTOR Kafka brokers https://github.com/ibm-messaging/kafka-connect-mq- source https://github.com/ibm-messaging/kafka-connect-mq-sink
  • 10. © 2021 IBM Corporation Application options The IBM source / sink connectors both require a queue name With the source connector there are three different approaches that can be used to get messages onto the queue to be consumed by the connector… Application Application Application Application Direct to queue # TO.KAFKA TO.KAFKA TO.KAFKA Subscribe to topic Streaming queue copy CLIENT QUEUE: FROM.KAFKA Kafka Connect worker TOPIC: FROM.MQ Kafka Connect worker MQ SINK CONNECTOR TOPIC: TO.MQ MQ SOURCE CONNECTOR CLIENT Kafka brokers IBM MQ QUEUE: TO.KAFKA
  • 11. © 2021 IBM Corporation Direct to queue New applications, or applications that are being changed, can just put the relevant data to the queue used by the connector Tends to be used when you don’t want to generate a copy of existing messages Often used on z/OS to avoid the network latency associated with a direct connection to Kafka via REST, or the need for more complicated approaches such as using the Kafka Java API in CICS MQ Application TO.KAFKA
  • 12. © 2021 IBM Corporation Subscribe to topic If messages are already being published to a topic, it’s trivial to generate another copy of them by using an administrative subscription pointing to the queue used by the connector This approach is transparent to existing applications MQ Application TO.KAFKA #
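In MQSC, such an administrative subscription is a single DEFINE SUB command; a sketch, where the topic string '/app/events' and the object names are illustrative:

```mqsc
* Queue consumed by the MQ source connector
DEFINE QLOCAL(TO.KAFKA)
* Durable administrative subscription: every publication on
* '/app/events' is also delivered to TO.KAFKA
DEFINE SUB(TO.KAFKA.SUB) TOPICSTR('/app/events') DEST(TO.KAFKA)
```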
  • 13. © 2021 IBM Corporation Subscribe to topic This approach can also be used to take a copy of messages that are being sent to a queue The queue is changed into a queue alias pointing to a topic. Two administrative subscriptions are created: one for the application that used to consume from the queue, the other for the connector This approach might not be viable in all cases, depending on whether queue aliases have been used, or whether the application can be adjusted to use them. It also relies on existing applications being able to tolerate the changes to message ID, correlation ID, etc. that occur when using pub/sub MQ Application TO.KAFKA # MQ Application TO.APP TO.APP MQ Application MQ Application ALIAS.TO.APP ALIAS.TO.APP
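A sketch of the alias conversion in MQSC, with illustrative object names (in practice the original queue would first need to be drained and removed, or the alias given a different name, before the alias can be created):

```mqsc
* Topic that replaces the original point-to-point queue
DEFINE TOPIC(TO.APP.TOPIC) TOPICSTR('/to/app')
* Alias with the old queue name, so putting applications are unchanged
DEFINE QALIAS(TO.APP) TARGTYPE(TOPIC) TARGET(TO.APP.TOPIC)
* One subscription feeds the existing consumer...
DEFINE QLOCAL(TO.APP.REAL)
DEFINE SUB(APP.SUB) TOPICSTR('/to/app') DEST(TO.APP.REAL)
* ...and one feeds the connector queue
DEFINE QLOCAL(TO.KAFKA)
DEFINE SUB(KAFKA.SUB) TOPICSTR('/to/app') DEST(TO.KAFKA)
```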
  • 14. © 2021 IBM Corporation Streaming queue copy From 9.2.3 on distributed, there is an alternative option: use streaming queues Streaming queues allow messages being put to one queue to be copied to a second queue without affecting the applications using the first queue For example: DEF QL(TO.APP) STREAMQ(TO.KAFKA) STRMQOS(MUSTDUP) DEF QL(TO.KAFKA) This says: when a message is sent to TO.APP, a copy of that message must be sent to TO.KAFKA Enabling streaming queues has no effect on existing applications as the original message doesn’t change, and the message sent to the second queue is identical (*) to the original message: same payload, message & correlation ID, etc MQ Application TO.KAFKA TO.APP STREAMQ(TO.KAFKA) MQ Application
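Expanding the example slightly, STRMQOS controls what happens when the duplicate cannot be made; a sketch showing the two documented quality-of-service options:

```mqsc
DEFINE QLOCAL(TO.KAFKA)
* Every message put to TO.APP is also duplicated to TO.KAFKA
DEFINE QLOCAL(TO.APP) STREAMQ(TO.KAFKA) STRMQOS(MUSTDUP)
* STRMQOS(MUSTDUP): the original put fails if the duplicate cannot be made
* STRMQOS(BESTEF): the original put succeeds even if the duplicate fails
```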
  • 15. © 2021 IBM Corporation The source connector in detail The connector is Java based and so uses the MQ JMS API to interact with MQ Provided in the form of a jar file. If using the open source connector, you build this yourself; otherwise you download it from CP4I The connector is installed into Kafka Connect and run with a properties file containing configuration Lots of flexibility in configuration: Client / bindings mode TLS including mutual auth User ID and password JMS and MQ format messages Message properties Client mode connections are the default CLIENT IBM MQ QUEUE: TO.KAFKA Kafka Connect worker TOPIC: FROM.MQ MQ SOURCE CONNECTOR Kafka brokers # The name of the target Kafka topic topic=FROM.MQ # The name of the MQ queue manager mq.queue.manager=MQ1 mq.connection.mode=bindings # The name of the source MQ queue mq.queue=TO.KAFKA Example properties
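By way of contrast with the bindings-mode example on the slide, a client-mode configuration with TLS and credentials might look like this sketch; the host, channel, user, and cipher suite values are placeholders:

```properties
# Client-mode connection to a remote queue manager (illustrative values)
mq.connection.mode=client
mq.connection.name.list=mqhost.example.com(1414)
mq.channel.name=KAFKA.SVRCONN
mq.user.name=kafkauser
mq.password=passw0rd
mq.ssl.cipher.suite=TLS_RSA_WITH_AES_128_CBC_SHA256
mq.queue.manager=MQ1
mq.queue=TO.KAFKA
topic=FROM.MQ
```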
  • 16. © 2021 IBM Corporation Data conversion Messages read in from MQ go through a number of conversions before being sent to Kafka If the connector receives a message it can’t convert, it will stop rather than throw the message away, so understanding the data format is important! When a message is read from MQ, its payload is converted to an internal format known as a source record This conversion is done via a record builder. Two record builders are provided with the connector (default, JSON) or you can write your own The source record is then converted into a Kafka message This conversion is done via a converter provided by Kafka (byte array, string, JSON) The documentation for the connector provides recommendations for common use cases MQ message MQMD (MQRFH2) Payload Source record Schema Value Kafka message Key Value MQ source connector IBM MQ Record builder Converter
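For example, one combination documented for the connector pairs the JSON record builder with Kafka's JSON converter, for JMS text messages whose payload is JSON; a sketch:

```properties
# Treat the MQ payload as a JMS message body
mq.message.body.jms=true
# Parse the payload as JSON into the source record
mq.record.builder=com.ibm.eventstreams.connect.mqsource.builders.JsonRecordBuilder
# Serialise the source record to the Kafka topic as schemaless JSON
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
```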
  • 17. © 2021 IBM Corporation Partitioning When messages are published to a Kafka topic they need to be spread across the different partitions of the topic This is done either by a round-robin process or, if the message contains a key, by using the hash of the key to select a partition (same key => same partition) It’s also possible to write your own partitioning implementation By default the MQ source connector doesn’t specify a key However it can be configured to use the MQ message ID, correlation ID, or the queue name as a key Kafka message Key Value Broker 1 Topic 1: partition 1 Broker 2 Topic 1: partition 2 Broker 3 Topic 1: partition 3 ?
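These key options map onto the connector's mq.record.builder.key.header property; a sketch using the correlation ID:

```properties
# Use the MQ correlation ID as the Kafka message key, so messages with
# the same correlation ID land on the same partition
mq.record.builder.key.header=JMSCorrelationID
key.converter=org.apache.kafka.connect.storage.StringConverter
```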
  • 18. © 2021 IBM Corporation Fault tolerance and scalability Both MQ and Kafka are highly fault tolerant and scalable. This extends to Kafka Connect and the MQ connectors Kafka Connect can be run in two modes: Standalone: a single Kafka Connect worker process runs the connector. This is useful for getting started with the connectors, or if you want guaranteed message ordering, but is a single point of failure, and can be a scalability bottleneck Distributed: multiple Kafka Connect worker processes run across a set of machines and form a Kafka Connect cluster. Connectors are run across a number of these workers depending on configuration. If a worker process running a connector fails, the connector can be restarted on a different worker in the cluster. The workers collaborate to ensure they each have about the same amount of work IBM MQ Connect Worker Connect Worker Connect Worker Broker Broker Broker Queue Manager Queue Manager Queue Manager
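Distributed mode is configured in the worker properties rather than the connector properties; a minimal sketch, with the broker addresses and topic names being illustrative (all workers sharing the same group.id form one Connect cluster):

```properties
# Kafka Connect distributed-mode worker configuration (illustrative values)
bootstrap.servers=broker1:9092,broker2:9092,broker3:9092
group.id=mq-connect-cluster
# Internal topics the workers use to share connector config, offsets and status
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
```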
  • 19. © 2021 IBM Corporation Fault tolerance and scalability Messages are received from MQ in batches using a JMS transacted session, i.e. using a transaction coordinated by the queue manager If there is any failure in converting any MQ messages then the transaction is rolled back and the messages can be reprocessed later The connector automatically deals with reconnections to MQ if needed Similarly the Kafka Connect framework automatically deals with reconnections to Kafka if needed, depending on configuration Kafka doesn’t have the ability to take part in two-phase commit transactions. Therefore some failure scenarios might end up with MQ messages being written to the Kafka topic multiple times IBM MQ Connect Worker Connect Worker Connect Worker Broker Broker Broker Queue Manager Queue Manager Queue Manager
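The batching behaviour is tunable via the connector's mq.batch.size property; because delivery to Kafka is at-least-once, downstream consumers should tolerate the occasional duplicate. A sketch:

```properties
# Maximum number of messages received from MQ in one transacted batch
# (the connector's documented default is 250)
mq.batch.size=250
```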
  • 20. © 2021 IBM Corporation Using the connector on z/OS Lots of customers on z/OS have MQ and use it to communicate between z/OS LPARs as well as between z/OS and distributed There has been a lot of interest in the MQ connector on z/OS It’s possible to run Kafka Connect on distributed and connect to z/OS as a client This is the same model you would likely use with distributed MQ However … CLIENT IBM MQ for z/OS QUEUE: TO.KAFKA Kafka Connect TOPIC: FROM.MQ MQ SOURCE CONNECTOR Kafka brokers z/OS LPAR
  • 21. © 2021 IBM Corporation Using the connector on z/OS An alternative is to run the connector on z/OS in USS and connect to the queue manager in bindings The connector then connects to the remote Kafka cluster over the network. Connections to Kafka are always network based This model uses less MQ CPU, and because Kafka Connect is Java based it is zIIP eligible, making costs very competitive Kafka Connect works fine on z/OS. However various properties files and shell scripts need to be converted to EBCDIC first The conversion is documented here: https://ibm.github.io/event-streams/connecting/mq/zos/ BINDINGS IBM MQ for z/OS QUEUE: TO.KAFKA TOPIC: FROM.MQ Kafka Connect MQ SOURCE CONNECTOR Kafka brokers z/OS LPAR

  Connector location | Connection to MQ           | Total CPU US | MQ CPU US | Connector CPU US
  USS                | Bindings                   | 100.6 (4.9)  | 2.4       | 98.2 (2.5)
  USS                | Client (Advanced VUE only) | 152.9 (60.9) | 58.4      | 94.5 (2.5)
  Distributed        | Client                     | 55.7         | 55.7      | N/A

  Values in brackets are if maximum zIIP offload is achieved https://www.imwuc.org/HigherLogic/System/DownloadDocumentFile.ashx?DocumentFileKey=179ebb75-7be2-42aa-6f0e-818aeef805f2
  • 22. © 2021 IBM Corporation DEMO
  • 23. © 2021 IBM Corporation MQ Application TO.KAFKA TO.APP STREAMQ(TO.KAFKA) MQ Application Starting with the following on Linux: MQ 9.2.3 installed Latest Kafka installed MQ source connector built and installed
  • 27. Send a couple of messages
  • 28. Check they are there
  • 30. Check connector is installed and configured
  • 31. Start a Kafka consumer, nothing there…
  • 33. Messages have now arrived
  • 35. Thank you © Copyright IBM Corporation 2021. All rights reserved. The information contained in these materials is provided for informational purposes only, and is provided AS IS without warranty of any kind, express or implied. Any statement of direction represents IBM’s current intent, is subject to change or withdrawal, and represents only goals and objectives. IBM, the IBM logo, and ibm.com are trademarks of IBM Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available at Copyright and trademark information. © 2021 IBM Corporation Matt Leming Architect, MQ for z/OS lemingma@uk.ibm.com

Editor's Notes

  1. Planned delivery date: IBM MQ v9.2.3 (July 2021) NOT ANNOUNCED User problem: Today, customers have to modify a point-to-point message flow to be a pub/sub flow to syphon off messages. The syphoned messages are not exact duplicates (so less useful for audit etc.) and it impacts existing applications, so the majority of customers see this as too risky a change. Examples: Customers want to tap into existing critical data that flows faultlessly over MQ without disrupting existing mission critical applications. #1 Customers want to route data in real-time into AI engines. Customers want to build new message flows from MQ into event streaming applications and vice versa. #2 Administration teams can enable specific developers or application teams to access ‘real world’ messages to more accurately simulate production workload for testing purposes (greater team agility as they no longer have to create sample message data for specific applications). Administrators will have the ability to gain greater confidence that changes aren’t going to break when pushed into production. Application teams who want to upgrade applications can compare characteristics, such as performance, of old and new side-by-side using real-world data. #3 Simpler and more accurate data available for auditing purposes If a disaster occurs, customers can rebuild systems and replay messages from certain times. One thing to note for replay is that some customers may be OK using messages that are ‘best effort’ or that only have the message content for replay… so we may be able to address this use case within the initial release. Planned extensions longer term include using duplicated messages to better understand health and automate MQ tuning for optimal performance and architectural set up.