IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ Clusters (Impact 2014)
 

Impact 2014, session 1872: IBM WebSphere MQ Clustering can be used to solve many problems, from simplified administration and workload management in an MQ network, to horizontal scalability and continuous availability of messaging applications. This session will show the full range of uses of MQ Clusters to solve real problems, highlighting the underlying technology being used.


Presentation Transcript

  • © 2014 IBM Corporation IBM WebSphere MQ® 1872: Managing Workloads, Scaling and Availability with MQ Clusters David Ware
  • 2 Agenda
    Scenario 1: My first cluster (starting to scale)
    Scenario 2: Service availability (routing around failures)
    Scenario 3: Location dependency (active-active scenarios with ties to home)
    Scenario 4: Avoiding interference (what are we sharing?)
    Scenario 5: When things go wrong (levels of DR and clustering implications)
  • 5 My First Cluster
  • 6 Where it all begins… [diagram: Client 1 connected to Service 1 through a single queue manager]
  • 7 Where it all begins… [diagram: Client 1 and Service 1 split across two connected queue managers]
  • 8 Over time… [diagram: more clients and services are added across the queue managers]
  • 9 Over time... [diagram: applications and service instances multiply across four queue managers]
  • 10 Until... [diagram: the network grows into many interconnected queue managers]
  • 12 Zooming in on an application… [diagram: Client 1 and two instances of Service 1 highlighted within the network]
  • 13 Starting to scale horizontally…
    • Workload Balancing
    • Service Availability
    • Location Transparency (of a kind)
    [diagram: Client 1 sending through its queue manager to two clustered instances of Service 1]
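    The deck carries this configuration in diagrams; as a companion, here is a minimal MQSC sketch of a first cluster with two clustered instances of a service queue. All names (cluster DEMO, queue managers QM1/QM2, queue SERVICE.Q, host names) are illustrative assumptions, with QM1 arbitrarily nominated as the full repository (two full repositories are usual in practice).

      * On QM1, the full repository
      ALTER QMGR REPOS(DEMO)
      DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
             CONNAME('qm1.example.com(1414)') CLUSTER(DEMO)
      DEFINE QLOCAL(SERVICE.Q) CLUSTER(DEMO)

      * On QM2, a partial repository hosting a second instance of the queue
      DEFINE CHANNEL(TO.QM2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
             CONNAME('qm2.example.com(1414)') CLUSTER(DEMO)
      DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
             CONNAME('qm1.example.com(1414)') CLUSTER(DEMO)
      DEFINE QLOCAL(SERVICE.Q) CLUSTER(DEMO)

    Messages put to SERVICE.Q from elsewhere in the cluster are then workload balanced across the two instances.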
  • 15 Availability
  • 16 Where can messages get stuck?
    • Target queues
    • Transmission queues
    [diagram: Client 1’s messages shown queued on a transmission queue and on a target queue]
  • 17 The service queue manager/host fails
    When a queue manager fails:
    • Ensure inbound messages are not locked to it
    • Restart it to release queued messages
    Message reallocation: unbound messages on the transmission queue can be diverted.
    Locked messages: messages on the failed queue manager are locked until it is restarted.
    Restart the queue manager: use multi-instance queue managers or HA clusters to restart a queue manager automatically.
    Reconnect the service: make sure the service is restarted and reconnects to the restarted queue manager.
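    Reallocation only applies to messages that are not bound to the failed destination; a one-line MQSC sketch (queue and cluster names are illustrative) of defining the service queue so that puts default to not-fixed binding:

      * DEFBIND(NOTFIXED) re-drives workload balancing per message, so
      * messages still on a transmission queue can be diverted on failure
      DEFINE QLOCAL(SERVICE.Q) CLUSTER(DEMO) DEFBIND(NOTFIXED)

    Applications can also request this behaviour per open with MQOO_BIND_NOT_FIXED.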
  • 19 Service application availability
  • 20 The service application fails
    • Cluster workload balancing does not take into account the availability of receiving applications, or a build-up of messages.
    Blissful ignorance: the sending queue manager is unaware of the failure of one of the service instances.
    Unserviced messages: half the messages will quickly start to build up on the failed instance’s service queue.
  • 21 Monitoring for service failures [diagram: one instance of Service 1 has lost its consuming application]
  • 22 Monitoring for service failures
    • WebSphere MQ provides a sample monitoring service
    • Regularly checks for attached consuming applications
    • Suitable for steady state service applications
    Detecting a change: when a change to the open handles is detected, the cluster workload balancing state is modified.
    Sending queue managers: newly sent messages will be sent to active instances of the queue.
    Moving messages: any messages that slipped through will be transferred to an active instance of the queue.
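    A hedged sketch of the kind of adjustment such a monitor makes, expressed in MQSC (the queue name and priority values are illustrative assumptions; the shipped sample that automates this is amqsclm):

      * Check whether any consumers are attached, and the queue depth
      DISPLAY QSTATUS(SERVICE.Q) TYPE(QUEUE) IPPROCS CURDEPTH
      * IPPROCS(0): no attached consumers, so lower this instance's
      * workload priority; senders now favour higher-priority instances
      ALTER QLOCAL(SERVICE.Q) CLWLPRTY(0)
      * When a consumer reattaches, restore the priority
      ALTER QLOCAL(SERVICE.Q) CLWLPRTY(1)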
  • 25 Client failures
  • 26 Client availability
    • Multiple locations for a client to connect to
    • Allows new requests when one queue manager is unavailable
    • What happens to replies after a failure?
    [diagram: Client 1 able to connect to either of two client-facing queue managers]
  • 27 Client host failure with an in-flight request/response
    • Reply messages are bound to the originating queue manager, with no ability to redirect.
    Request message: typically a request message will fill in the ReplyToQMgr based on the outbound queue manager.
    Reply message bound: the reply message will be locked to that outbound queue manager.
  • 28 Client host failure with an in-flight request/response
    • Reply-to queue aliases and reply-to queue manager aliases can be used to blank out the outbound resolution of the ReplyToQMgr field.
    • Typically, under normal running, you require the originating queue manager to receive the replies; cluster workload balancing configuration can provide this.
    On each client-facing queue manager:
      DEF QLOCAL(REPLYQ) CLUSTER(CLUSTER1)
      DEF QREMOTE(REPLYQALIAS) RNAME(REPLYQ) RQMNAME(DUMMY)
    On the replying side, a queue manager alias:
      DEF QREMOTE(DUMMY) RNAME(' ') RQMNAME(' ')
    Requesting application: the request message sets ReplyToQ to ‘REPLYQALIAS’ and ReplyToQMgr to ‘ ’.
    Name resolution: the outgoing request resolves the ReplyToQ to ‘REPLYQ’ and the ReplyToQMgr to ‘DUMMY’.
    Replying application: the application replies to ‘REPLYQ’ on queue manager ‘DUMMY’.
    Name resolution: target queue manager ‘DUMMY’ is resolved to ‘ ’, allowing cluster resolution to occur.
  • 30 Location Dependency
  • 31 Global applications
    • New York and London: separated by an ocean and 3,500 miles
    • Prefer traffic to stay geographically local
    • Except when you have to look further afield
    • Clusters can be used to span geographies
    [diagram: clients and service instances in both New York and London, joined by one cluster]
  • 33 Setting this up
  • 34 One cluster
    • Clients always open AppQ
    • Local alias determines the preferred region
    • Cluster workload priority is used to target geographically local cluster aliases
    • Use of CLWLPRTY enables automatic failover
    • CLWLRANK can be used for manual failover
    New York client queue managers:
      DEF QALIAS(AppQ) TARGET(NYQ)
    New York service queue managers:
      DEF QALIAS(NYQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(9)
      DEF QALIAS(LonQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(4)
    London client queue managers:
      DEF QALIAS(AppQ) TARGET(LonQ)
    London service queue managers:
      DEF QALIAS(LonQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(9)
      DEF QALIAS(NYQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(4)
  • 35 The two cluster alternative
    • The service queue managers join both geographical clusters (USA and EUROPE)
    • Each with separate cluster receivers for each cluster, at different cluster priorities
    • Queues are clustered in both clusters
    • The client queue managers are in their local cluster only
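    A hedged MQSC sketch of one New York service queue manager in this layout (channel, host and queue names are illustrative assumptions):

      * One cluster receiver per cluster; the local cluster's channel
      * carries the higher workload priority
      DEFINE CHANNEL(USA.NYQM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
             CONNAME('nyqm1.example.com(1414)') CLUSTER(USA) CLWLPRTY(9)
      DEFINE CHANNEL(EUROPE.NYQM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
             CONNAME('nyqm1.example.com(1414)') CLUSTER(EUROPE) CLWLPRTY(4)
      * Cluster the service queue in both clusters via a namelist
      DEFINE NAMELIST(GLOBAL) NAMES(USA, EUROPE)
      DEFINE QLOCAL(ReqQ) CLUSNL(GLOBAL)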
  • 37 Avoiding interference
  • 38 The cluster as a pipe
    • Often a WebSphere MQ backbone will be used for multiple types of traffic: real time queries, big data transfer, audit events
    [diagram: several client/service pairs sharing one messaging backbone]
  • 39 The cluster as a pipe
    • Often a WebSphere MQ backbone will be used for multiple types of traffic
    • When using a single cluster and the same queue managers, messages all share the same channels
    • Even multiple cluster receiver channels in the same cluster will not separate out the different traffic types
  • 41 The cluster as a pipe
    • Often a WebSphere MQ backbone will be used for multiple types of traffic
    • When using a single cluster and the same queue managers, messages all share the same channels
    • Even multiple cluster receiver channels in the same cluster will not separate out the different traffic types
    • Multiple overlaid clusters with different channels enable separation
  • 43 Workload balancing level interference
    • Multiple applications sharing the same queue managers and the same cluster channels.
    • Cluster workload balancing is at the channel level: messages sharing the same channels, but destined for different target queues, will be counted together.
    • The two channels here have an even 50/50 split of messages… but the two instances of Service 1 do not!
    • Split the Service 1 and Service 2 queues out into separate clusters or queue managers, or customise the workload balancing logic (sketched below).
    [diagram: uneven message counts arriving at the two instances of Service 1 despite an even channel split]
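    A hedged sketch of the separate-clusters remedy (cluster, channel, host and queue names are illustrative): each service gets its own cluster and therefore its own channels, so the per-channel balancing counts no longer mix.

      * On a service queue manager: one cluster receiver per cluster
      DEFINE CHANNEL(SVC1.QM3) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
             CONNAME('qm3.example.com(1414)') CLUSTER(SVC1)
      DEFINE CHANNEL(SVC2.QM3) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
             CONNAME('qm3.example.com(1414)') CLUSTER(SVC2)
      * Each service queue is clustered only in its own cluster
      DEFINE QLOCAL(SERVICE1.Q) CLUSTER(SVC1)
      DEFINE QLOCAL(SERVICE2.Q) CLUSTER(SVC2)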
  • 44 Multiple cluster transmit queues (V7.5 Distributed, V8 z/OS)
    A much requested feature… multiple cluster transmission queues:
    Separation of message traffic: with a single transmission queue there is potential for pending messages for cluster ChannelA to interfere with messages pending for cluster ChannelB.
    Management of messages: use of queue concepts such as MAXDEPTH is not useful when using a single transmission queue for more than one channel.
    Monitoring: tracking the number of messages processed by a cluster channel is currently difficult/impossible using the queue.
    Often requested for performance, but in reality a shared transmission queue is not always the bottleneck; often other solutions to improving channel throughput (e.g. multiple cluster receiver channels) are really what’s needed.
  • 45 Multiple cluster transmit queues: Automatic
    • Configured on the sending queue manager, not the owners of the cluster receiver channel definitions.
    • A queue manager switch automatically creates a dynamic transmission queue per cluster sender channel:
      ALTER QMGR DEFCLXQ( SCTQ | CHANNEL )
    • Dynamic queues are based upon the model queue SYSTEM.CLUSTER.TRANSMIT.MODEL
    • Well known queue names: SYSTEM.CLUSTER.TRANSMIT.<CHANNEL-NAME>
    [diagram: channels ChlA, ChlB and ChlC each draining their own SYSTEM.CLUSTER.TRANSMIT.<channel> queue]
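    A small hedged example of switching a sending queue manager over and inspecting the result (the generic display assumes the default queue naming above):

      * Use one dynamic transmission queue per cluster sender channel
      ALTER QMGR DEFCLXQ(CHANNEL)
      * Cluster sender channels pick up the new queues as they restart;
      * their depths can then be monitored individually
      DISPLAY QLOCAL('SYSTEM.CLUSTER.TRANSMIT.*') CURDEPTH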
  • 46 Multiple cluster transmit queues: Manual
    • Still configured on the sending queue manager, not the owners of the cluster receiver channel definitions.
    • Administratively define a transmission queue and configure which cluster sender channels will use it:
      DEFINE QLOCAL(CLUSTER1.XMITQ) CLCHNAME(GREEN.*) USAGE(XMITQ)
    • Set a channel name pattern in CLCHNAME
    • Single/multiple channels (wildcard), e.g. all channels for a specific cluster (assuming a suitable channel naming convention!)
    • Any cluster sender channel not covered by a manual transmission queue defaults to the DEFCLXQ behaviour
    [diagram: channels GREEN.A, PINK.A and PINK.B fed from GREEN.XMITQ and PINK.XMITQ]
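    A hedged sketch matching the diagram, assuming channels are named <cluster>.<target> so a wildcard selects all of a cluster's channels:

      * One manually defined transmission queue per cluster
      DEFINE QLOCAL(GREEN.XMITQ) USAGE(XMITQ) CLCHNAME('GREEN.*')
      DEFINE QLOCAL(PINK.XMITQ) USAGE(XMITQ) CLCHNAME('PINK.*')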
  • 47 When things go wrong: Disaster Recovery
  • 48 What do we need to recover?
    Everything:
    • Applications must be able to connect and target the exact same MQ resources.
    • Every existing, in-flight message must be processed whilst in DR mode.
    • New messages must be processed whilst in DR mode.
    WebSphere MQ resources:
    • Applications must be able to connect and target the exact same MQ resources.
    • New messages must be processed whilst in DR mode.
    Application availability:
    • Applications must be able to connect and target equivalent MQ resources.
    • New messages must be processed whilst in DR mode.
  • 50 Recovering everything: datacenter replication (synchronous)
    • Little need to replicate each full repository, just keep the two apart
    • Outside the datacenter, queue managers must be locatable in either site: an IP switching layer, or comma separated channel names
    [diagram: queue managers and databases in Datacenter 1 synchronously replicated to Datacenter 2]
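    The comma separated option refers to a connection name list; a hedged MQSC sketch (host names illustrative) of a cluster receiver whose CONNAME lists both sites, so the rest of the cluster can find the queue manager wherever it is currently running:

      * Each address in the list is tried in turn when connecting
      DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
             CONNAME('qm1.dc1.example.com(1414),qm1.dc2.example.com(1414)') +
             CLUSTER(CLUSTER1)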
  • 52 Recovering MQ resources: datacenter replication (asynchronous)
    • Backup queue managers contain historic messages (if any); cluster state will also be historic.
    • Refreshing cluster state on failover and failback is essential.
    [diagram: queue managers asynchronously replicated to Datacenter 2, with a REFRESH required on failover and failback]
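    A hedged sketch of that refresh (cluster name illustrative), issued on a recovered partial repository once it restarts in the other site:

      * Discard the locally cached cluster state and re-learn it from
      * the full repositories; REPOS(YES) is not used on full repositories
      REFRESH CLUSTER(CLUSTER1) REPOS(YES)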
  • 54 Application availability: no replication, warm standby
    • Backup queue managers are always running.
    • Cluster workload balancing is used to direct traffic to the live queue managers.
    • Messages will be trapped on the live queue managers in the event of a failure.
    • Applications must be designed to accommodate this configuration.
    [diagram: standby queue managers in Datacenter 2 ready to take new traffic]
  • 56 How much do we need to recover?
    [diagram: DR options on a cost spectrum, from restoring a stale backup or duplicated queue manager configuration, through restoring a nearly live backup (asynchronous replication, requiring cluster REFRESH), to live disk replication (synchronous replication) and live, cross-center WebSphere MQ clustering (active/active); mapped against recovering application availability, WebSphere MQ resources, or everything]
  • 57 Questions? Thank you
  • 58 Related sessions, Monday to Thursday:
    09:00: 1868: IBM WS MQ: Disaster Recovery; 1870: IBM WS MQ: Using Pub/Sub in an MQ Network; 1924: Roundtable: IBM WS MQ Light in an IBM WS MQ Infrastructure
    10:30: 1880: Secure Messages with IBM WS MQ Advanced Message Security; 1866: Introduction to IBM Messaging Capabilities; 2640: MQ Telemetry Transport (MQTT) Details; 3134: Meet the Experts: IBM MessageSight; 1924: Roundtable: IBM WS MQ Light in an IBM WS MQ Infrastructure; 1896: How to Develop Responsive Applications with IBM WS MQ Light; 1885: IBM WS MQ on z/OS & Distributed Platforms - Are They Like Oil & Water; 1917: Hands-on Lab: Developing a First Application with IBM WS MQ Light; 1294: Enable Active-Active Business Through Enterprise High-availability Messaging Technology; 1873: IBM WS MQ Security: Latest Features Deep Dive
    12:00: LUNCH; 3420: Managing What Needs to Be Managed - It Shouldn’t Matter Where It Is
    13:00: 1227: How Not to Run a Certificate Management Center of Excellence; 1869: IBM WS MQ: Using the Publish Subscribe Messaging Paradigm; 1863: What's New in IBM Messaging; 1872: IBM WS MQ: Managing Workloads, Scaling & Availability with MQ Clusters; 1879: Using IBM WS MQ in Managed File Transfer Environments; 1922: Roundtable: IBM Messaging Feedback; 1883: Where Is the Message - Analyze IBM WS MQ Recovery Logs, Trace Routes, & Look In Applications
    14:15: 1800: IBM WS MQ for z/OS & CICS: Workload Balancing in a Plexed World; 1863: What's New in IBM Messaging; 1924: Roundtable: IBM WS MQ Light in an IBM WS MQ Infrastructure; 1876: IBM WS MQ for z/OS: Latest Features Deep Dive; 1229: The IBM WS MQ Toolbox: 20 Scripts, One-liners, & Utilities for UNIX & Windows; 1804: Mean Time to Innocence: IBM WS MQ Problem Determination
    16:00: 1877: IBM WS MQ for z/OS: Performance & Accounting; 1894: What Is IBM WS MQ Light & Why It Matters; 1882 (15:45): Building a Scalable & Continuously Available IBM WS MQ Infrastructure with Travelport; 1720: Building a Managed File Transfer Service at ADP; 1897: Messaging in the Cloud with IBM WS MQ Light & IBM BlueMix; 1916: Hands-on Lab: IBM WS MQ; 3133: Meet the Experts: IBM Messaging; 1874: IBM WS MQ: Performance & Internals Deep Dive (Not zOS); 1866: Introduction to IBM Messaging Capabilities; 2454: Using IBM's Managed File Transfer Portfolio to Maximize Data Effectiveness
    17:15: 1881: Using IBM WS MQ with IBM WAS & Liberty Profile; 1873: IBM WS MQ Security: Latest Features Deep Dive; 1878 (17:00): IBM WS MQ for zOS: Shared Queues; 1922: Roundtable: IBM Messaging Feedback; 1867: IBM WS MQ: High Availability; 2646: Discover the Value IBM Software Delivers On Top of Open Source
  • 59 We Value Your Feedback
    Don’t forget to submit your Impact session and speaker feedback! Your feedback is very important to us; we use it to continually improve the conference.
    • Use the Conference Mobile App or the online Agenda Builder to quickly submit your survey
    • Navigate to “Surveys” to see a view of surveys for sessions you’ve attended
  • © 2014 IBM Corporation Thank You
  • 61 Legal Disclaimer • © IBM Corporation 2014. All Rights Reserved. • The information contained in this publication is provided for informational purposes only. While efforts were made to verify the completeness and accuracy of the information contained in this publication, it is provided AS IS without warranty of any kind, express or implied. In addition, this information is based on IBM’s current product plans and strategy, which are subject to change by IBM without notice. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, this publication or any other materials. Nothing contained in this publication is intended to, nor shall have the effect of, creating any warranties or representations from IBM or its suppliers or licensors, or altering the terms and conditions of the applicable license agreement governing the use of IBM software. • References in this presentation to IBM products, programs, or services do not imply that they will be available in all countries in which IBM operates. Product release dates and/or capabilities referenced in this presentation may change at any time at IBM’s sole discretion based on market opportunities or other factors, and are not intended to be a commitment to future product or feature availability in any way. Nothing contained in these materials is intended to, nor shall have the effect of, stating or implying that any activities undertaken by you will result in any specific sales, revenue growth or other results. • All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.