HHM-3479: MQ Clustering and Shared Queues for High Availability


We review clustering and shared-queue technologies, their differences and synergies, as a foundation for building a highly available messaging service that is resilient during both planned and unplanned outages of z Systems components.



  1. HHM-3479 MQ Clustering and Shared Queues for High Availability
     Pete Siddall, pete_siddall@uk.ibm.com
  2. Agenda
     • Queue Sharing Groups with Shared Queues
     • MQ Queue Manager Clustering
     • Channel Technologies: A Comparison
       – Shared channels
       – Cluster channels
       – Normal channels
     • Best Practice Scenario Example
  3. Queue Sharing Groups
  4. What is a queue-sharing group (QSG)?
     • High availability for MQ resources
     • Queue managers share queues
       – Applications can access messages on shared queues from any queue manager in the same QSG
       – Each queue manager also has its own private queues
     • Shared queues are stored in the Coupling Facility (CF)
       – Large messages are offloaded to DB2 or SMDS to maximise capacity
     • Applications can connect to a named queue manager, or to any available queue manager using the QSG name
     [Diagram: a queue-sharing group of four queue managers, each with private queues, all accessing shared queues in the CF]
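     As a sketch of the definition involved (the queue and CF structure names here are invented for illustration, not from the deck), a shared queue is an ordinary local queue given a shared disposition and a CF application structure:

         * Hypothetical example: a queue held in CF structure APPSTR1,
         * visible to every queue manager in the queue-sharing group
         DEFINE QLOCAL(APP.SHARED.QUEUE) QSGDISP(SHARED) CFSTRUCT(APPSTR1)

     An application can then MQCONN to a specific queue manager name, or to the QSG name to be attached to any available member, and work with APP.SHARED.QUEUE either way.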
  5. Components of a queue-sharing group
     [Diagram: a queue-sharing group comprising queue managers, each with private queues, private objects and a channel initiator; shared objects; shared queues in the CF; a DB2 data sharing group (DB2A, DB2B, DB2C); and SMDS data sets holding message data > 63KB]
  6. Advantages of shared queues
     • Allows MQ applications to be scalable and highly available
     • Allows workload balancing to be implemented
       – Natural pull-workload balancing based on the processing capacity of each queue manager
     • No outages for shared-queue applications
       – Outages of each queue manager in the QSG can be staggered, even during upgrades
     • Flexible capacity management
       – Queue managers and queues can be added or removed dynamically
     • Peer recovery
       – MQ detects if a queue manager abnormally disconnects from the Coupling Facility
       – Another queue manager in the QSG completes pending units of work, where possible
       – Uncommitted gets (under sync point) are backed out, so messages can be 're-got' by another queue manager
       – Uncommitted puts (under sync point) are committed, so messages are made available as soon as possible
  7. Failure and persistence
     • Queue manager failure
       – Nonpersistent messages on that queue manager's private queues are lost (deleted)
       – Messages on shared queues are OK (kept in the CF)
     • Coupling Facility failure
       – Nonpersistent messages on private queues are OK (kept)
       – Nonpersistent messages on shared queues are lost (deleted)
       – Persistent messages on shared queues are restored from the log
  8. A history of large shared-queue messages
     • Coupling Facility storage is limited
       – A balance between number of messages and size of messages
     • Version 5.2: non-persistent messages < 63KB only
     • Version 5.3: persistent messages also supported, but still limited to < 63KB
     • Version 6.0: large messages supported up to 100MB
       – Messages > 63KB offloaded to DB2 as BLOBs
     • Version 7.1: support for Shared Message Data Sets (VSAM)
       – An alternative to DB2 for large messages, with much better performance
       – Custom offload rules
     • zEC12 with Flash Express: SCM storage can also be exploited to increase CF capacity
  9. Large shared-queue messages (SMDS), V7.1
     • One SMDS per queue manager per CF structure
     • Each queue manager writes large messages only to its own SMDS
     • Queue managers can get messages from any queue manager's SMDS
     [Diagram: an application on QM1 MQPUTs a 100K message; a pointer goes to the shared queue and the message body to QM1's SMDS; an application on QM2 MQGETs the message, reading the body from QM1's SMDS]
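     To make this concrete, a sketch of defining an application structure that offloads large messages to SMDS (the structure and data set names are assumptions for the example):

         * Hypothetical example: CFLEVEL(5) is required for SMDS offload.
         * The single * in DSGROUP is replaced by each queue manager's
         * name, giving one SMDS per queue manager for this structure.
         DEFINE CFSTRUCT(APPSTR1) CFLEVEL(5) OFFLOAD(SMDS) +
                DSGROUP('MQHA.APPSTR1.*.SMDS') DSBLOCK(256K)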
  10. SMDS performance
      • Tests show comparable CPU savings, making SMDS a more usable feature for managing your CF storage
      • One SMDS per CF structure provides better scaling than DB2 BLOB storage
      [Charts: '3 LPAR Test - DB2' and '3 LPAR Test - SMDS', 64KB non-persistent messages in-syncpoint; transactions/second against queue pairs (1-10) for NP SIS scaling with 3, 6 and 9 queue managers; the DB2 axis tops out at 400 transactions/second, the SMDS axis at 7000]
  11. MQ Queue Manager Clusters
  12. Goals of Clustering
      • Multiple queues with a single image
      • Failure isolation
      • Scalable throughput
      • MQI applications exploit clusters transparently
      • Definition through usage (MQOPEN)
      • MQGET is always local
  13. The purpose of clustering
      • Simplified administration
        – Large WMQ networks require many object definitions: channels, transmit queues, remote queues
      • Workload balancing
        – Spread the load
        – Route around failures
      • Flexible connectivity
        – Overlapping clusters
        – Gateway queue managers
      • Pub/sub clusters
  14. Simplified administration
      • Large WMQ networks require many object definitions
      • Manually defining the network needs, for each queue manager you connect to:
        – A transmission queue
        – A sender channel
        – A remote queue (optional, so we won't count it)
        plus a single generic receiver channel per queue manager, giving
        #Objects = ((#QMgrs - 1) x 2 + 1) x #QMgrs, which is 2 x #QMgrs^2 - #QMgrs
      • A cluster network needs only:
        – A cluster-sender channel to two full repositories
        – A cluster-receiver channel as the model back to me
        giving #Objects = 2 x #QMgrs

        Number of QMgrs            2    3    4    5    6    7    8    9   10   20    50
        Manually defined objects   6   15   28   45   66   91  120  153  190  780  4950
        Objects for cluster        4    6    8   10   12   14   16   18   20   40   100

      For example, with 10 queue managers: 2 x 10^2 - 10 = 190 manually defined objects versus 2 x 10 = 20 cluster objects.
  15. Split cluster transmit queue (V8)
      • A much-requested feature, for various reasons:
      • Separation of message traffic
        – With a single transmission queue there is potential for pending messages for cluster channel 'A' to interfere with messages pending for cluster channel 'B'
      • Management of messages
        – Queue concepts such as MAXDEPTH are not useful when a single transmission queue serves more than one channel
      • Monitoring
        – Tracking the number of messages processed by a cluster channel is currently difficult/impossible using queue monitoring (some information is available via Channel Status)
      • Not about performance...
  16. Split cluster transmit queue: automatic
      • A new queue manager attribute affects all cluster-sender channels on the queue manager:
        – ALTER QMGR DEFCLXQ( SCTQ | CHANNEL )
      • The queue manager will automatically define a PERMANENT-DYNAMIC queue for each CLUSSDR channel
        – Dynamic queues are based on the new model queue SYSTEM.CLUSTER.TRANSMIT.MODEL
        – Well-known queue names: SYSTEM.CLUSTER.TRANSMIT.<CHANNEL-NAME>
      • Authority checks at MQOPEN of a cluster queue are still made against SYSTEM.CLUSTER.TRANSMIT.QUEUE, even if CHANNEL is selected
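      A minimal sketch of switching to per-channel transmission queues and then listing the dynamic queues the queue manager creates (the generic DISPLAY filter is just one way to see them):

          * Switch cluster-sender channels to individual transmission queues
          ALTER QMGR DEFCLXQ(CHANNEL)
          * List the permanent-dynamic queues built from the model queue
          DISPLAY QUEUE('SYSTEM.CLUSTER.TRANSMIT.*')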
  17. Splitting out the S.C.T.Q. per channel
      [Diagram: QM_A in CLUSTER1 with separate transmission queues ..B and ..C feeding Q1 on QM_B and Q2 on QM_C]
  18. Split cluster transmit queue: manual
      • The administrator manually defines a transmission queue and, using a new queue attribute, names the CLUSSDR channel(s) that will use this queue as their transmission queue:
        – DEFINE QLOCAL(APPQMGR.CLUSTER1.XMITQ) CLCHNAME(CLUSTER1.TO.APPQMGR) USAGE(XMITQ)
      • CLCHNAME can include a wildcard at the start or end, to allow a single queue to be used for multiple channels. In this example, assuming a naming convention where channel names all start with the name of the cluster, all channels for CLUSTER1 use the transmission queue CLUSTER1.XMITQ:
        – DEFINE QLOCAL(CLUSTER1.XMITQ) CLCHNAME(CLUSTER1.*) USAGE(XMITQ)
        – Multiple queues can be defined to cover all, or a subset of, the cluster channels
      • Can also be combined with the automatic option
        – Manual queue definitions take precedence
  19. Splitting out by cluster (or application)
      [Diagram: QM_A with transmission queues ..1.B and ..2.B (e.g. CLUSTER1.QM_B) feeding Q1 in Cluster 1 and Q2 in Cluster 2, both hosted on QM_B]
  20. Channel Technologies
  21. Normal Channels
      • Channel name must match at both ends
        – Manual definition
      • Sender: MQGET from the transmission queue (XmitQ)
        – Sync state written to the SyncQ
      • Receiver: MQPUT to application queues
        – Sync state written to the SyncQ
      [Diagram: an MQ application puts messages to a transmission queue on QM1 (local); the sender MCA moves them over the network to the receiver MCA on QM2 (remote), which puts them to the application queues; each end keeps its sync state on a SyncQ]
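      For comparison with the shared and cluster cases on the next slides, a minimal manually defined pair might look like this (the queue manager, channel, and host names are invented):

          * On QM1: transmission queue plus sender channel (MQGET from the XmitQ)
          DEFINE QLOCAL(QM2) USAGE(XMITQ)
          DEFINE CHANNEL(QM1.TO.QM2) CHLTYPE(SDR) TRPTYPE(TCP) +
                 CONNAME('qm2.example.com(1414)') XMITQ(QM2)
          * On QM2: receiver channel with the matching name (MQPUT to app queues)
          DEFINE CHANNEL(QM1.TO.QM2) CHLTYPE(RCVR) TRPTYPE(TCP)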
  22. Shared Channels
      • Inbound channel: the port disposition determines the channel disposition
        – A connection to the QSG's generic port gives CHLDISP(SHARED), with the SyncQ held in the Coupling Facility; a connection to a queue manager's local port gives CHLDISP(PRIVATE), with a private SyncQ
      [Diagram: QM1, QM2 and QM3 in a QSG, each with a shared port (SP) behind the generic port and a local port (LP); shared SyncQs in the CF, private SyncQs per queue manager]
  23. Shared Channels
      • Inbound channel: the port disposition determines the channel disposition
      • Outbound channel: the XmitQ disposition determines the channel disposition
        – A shared XmitQ in the Coupling Facility gives CHLDISP(SHARED); a private XmitQ gives CHLDISP(PRIVATE)
      [Diagram: QM1, QM2 and QM3 in a QSG; a shared XmitQ and SyncQ in the CF drive the shared outbound channel, while a private XmitQ and SyncQ drive a private channel]
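      A sketch of the shared case, assuming a QSG named QSG1, a target queue manager QM9, and an administration structure ADMINSTR (all invented names): the channel definitions are stored once with group disposition, and the shared transmission queue makes the outbound channel shared:

          * Channel definitions stored once in the shared repository
          DEFINE CHANNEL(TO.QSG1) CHLTYPE(RCVR) TRPTYPE(TCP) QSGDISP(GROUP)
          DEFINE CHANNEL(QSG1.TO.QM9) CHLTYPE(SDR) TRPTYPE(TCP) QSGDISP(GROUP) +
                 CONNAME('qm9.example.com(1414)') XMITQ(QM9.XMITQ)
          * A shared XmitQ in the CF makes the outbound channel shared
          DEFINE QLOCAL(QM9.XMITQ) USAGE(XMITQ) QSGDISP(SHARED) CFSTRUCT(ADMINSTR)
          * Start with shared disposition; any member of the QSG can run it
          START CHANNEL(QSG1.TO.QM9) CHLDISP(SHARED)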
  24. Cluster Channels
      • Channel definition
        – Bootstrap CLUSSDR (points to the repository)
        – Manual CLUSRCVR (defines the model back to this queue manager)
        – Automatic CLUSSDRs (for routes to all other queue managers)
      • Transmission queue
        – SYSTEM.CLUSTER.TRANSMIT.QUEUE
        – With the split cluster XmitQ feature it could be another queue
      • Synchronisation
        – As for normal channels
        – BATCHHB can reduce the likelihood of ending INDOUBT(YES)
      • MQGET off the XmitQ uses the CorrelId of the channel name
      Example CLUSRCVR definition:
        DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('QM1.mach.ibm.com') CLUSTER(DEMOCLUS)
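      The matching bootstrap definition (the repository queue manager and host name here are assumptions) is the one manual CLUSSDR pointing at a full repository; CLUSSDRs to the other members are then built automatically from their CLUSRCVR definitions:

          * Manual 'bootstrap' cluster-sender to a full repository
          DEFINE CHANNEL(TO.FR1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
                 CONNAME('fr1.mach.ibm.com') CLUSTER(DEMOCLUS)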
  25. Comparison of Channel Technologies

                                                         Shared   Cluster   Normal
      Shared synchronisation state                        Yes      No        No
      Can deliver messages to shared queues               Yes      Yes       Yes
      High availability for connections                   Yes      Yes       No
      High availability for delivery of new and
        in-transit messages                               Yes      No        No
      Definition administration                           Group    Auto      Manual

      Sending side:
      • Channel definition in place: manually defined, or automatically installed
      • Transmission queue identified: named on the definition, or the cluster one; QSGDISP(PRIVATE) or QSGDISP(SHARED)
      • Synchronisation queue identified: QSGDISP(PRIVATE) or QSGDISP(SHARED)
      • MQGET on the XmitQ qualifier: the channel name, for a cluster channel
      • Now all sending channels work the same
      Receiving side:
      • Target CONNAME and port resolved: specific queue manager, or generic port
      • Synchronisation queue identified: QSGDISP(PRIVATE) or QSGDISP(SHARED)
      • Now all receiving channels work the same
  26. Best Practice Scenario Example
  27. Scenario: combining clusters and queue-sharing groups
      • MQ clients attach to two gateway queue managers on AIX, QM_GW_01 and QM_GW_02, and MQPUT1 their requests to TRANSFER.REQUEST.QA.01 (a QALIAS); each gateway also hosts an instance of the reply queue TRANSFER.REPLY.01 (QLOCAL, cluster)
      • The alias resolves to TRANSFER.REQUEST.01 (shared, cluster), hosted by QM_CB_01 and QM_CB_02 (MQ on z/OS), with cluster workload balancing spreading requests across the two backends
      • In CICS on z/OS, CKTI triggering via TRANSFER.REQUEST.INIT.QUEUE (optional) starts transactions on CICS1 and CICS2 that MQGET with wait in a loop, perform the backend processing, and MQPUT the reply; the client completes with an MQGET with wait on the reply queue
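      The key object definitions behind this picture might look roughly as follows (the CF structure name APPSTR1 and cluster name CLUSTER1 are invented; the queue names come from the slide):

          * In the z/OS queue-sharing group: one shared, clustered request queue
          DEFINE QLOCAL(TRANSFER.REQUEST.01) QSGDISP(SHARED) +
                 CFSTRUCT(APPSTR1) CLUSTER(CLUSTER1)
          * On each AIX gateway: the alias the clients open...
          DEFINE QALIAS(TRANSFER.REQUEST.QA.01) TARGET(TRANSFER.REQUEST.01)
          * ...and a clustered local reply queue, one instance per gateway
          DEFINE QLOCAL(TRANSFER.REPLY.01) CLUSTER(CLUSTER1)

      Either backend queue manager can serve a request from the shared queue, and replies are workload-balanced back to whichever gateway instance of TRANSFER.REPLY.01 the cluster selects.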
  28. Notices and Disclaimers
      Copyright © 2016 by International Business Machines Corporation (IBM). No part of this document may be reproduced or transmitted in any form without written permission from IBM.
      U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM.
      Information in these presentations (including information relating to products that have not yet been announced by IBM) has been reviewed for accuracy as of the date of initial publication and could include unintentional technical or typographical errors. IBM shall have no responsibility to update this information. THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IN NO EVENT SHALL IBM BE LIABLE FOR ANY DAMAGE ARISING FROM THE USE OF THIS INFORMATION, INCLUDING BUT NOT LIMITED TO, LOSS OF DATA, BUSINESS INTERRUPTION, LOSS OF PROFIT OR LOSS OF OPPORTUNITY. IBM products and services are warranted according to the terms and conditions of the agreements under which they are provided.
      Any statements regarding IBM's future direction, intent or product plans are subject to change or withdrawal without notice.
      Performance data contained herein was generally obtained in controlled, isolated environments. Customer examples are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual performance, cost, savings or other results in other operating environments may vary.
      References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business.
      Workshops, sessions and associated materials may have been prepared by independent session speakers, and do not necessarily reflect the views of IBM. All materials and discussions are provided for informational purposes only, and are neither intended to, nor shall constitute, legal or other guidance or advice to any individual participant or their specific situation.
      It is the customer's responsibility to ensure its own compliance with legal requirements and to obtain advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulatory requirements that may affect the customer's business and any actions the customer may need to take to comply with such laws. IBM does not provide legal advice or represent or warrant that its services or products will ensure that the customer is in compliance with any law.
  29. Notices and Disclaimers, Cont'd
      Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to interoperate with IBM's products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
      The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights, trademarks or other intellectual property right.
      IBM, the IBM logo, ibm.com, Aspera®, Bluemix, Blueworks Live, CICS, Clearcase, Cognos®, DOORS®, Emptoris®, Enterprise Document Management System™, FASP®, FileNet®, Global Business Services®, Global Technology Services®, IBM ExperienceOne™, IBM SmartCloud®, IBM Social Business®, Information on Demand, ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON, OpenPower, PureAnalytics™, PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®, pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, Smarter Commerce®, SoDA, SPSS, Sterling Commerce®, StoredIQ, Tealeaf®, Tivoli®, Trusteer®, Unica®, urban{code}®, Watson, WebSphere®, Worklight®, X-Force®, System z® and z/OS are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml.
  30. Thank You
      Your feedback is important! Access the InterConnect 2016 Conference Attendee Portal to complete your session surveys from your smartphone, laptop or conference kiosk.
