© 2014 IBM Corporation
IBM Software Group WebSphere Software
Managing Workloads, Scaling
and Availability with MQ Clusters
Adrian Dick
adrian.dick@uk.ibm.com
Agenda
Scenario 1 My first cluster
 Starting to scale
Scenario 2 Service availability
 Routing around failures
Scenario 3 Location dependency
 Active active scenarios with ties to home
Scenario 4 Avoiding interference
 What are we sharing?
Scenario 5 When things go wrong
 Levels of DR and clustering implications
3
WebSphere MQ
IBM Software Group | WebSphere software
Introduction
 This session approaches the topic from the point of view of a set of common clustering scenarios or ‘use cases’.
 We will build up from a fairly simple and common initial clustering setup to tackle some of the more complicated issues which often come up in evolving clustered environments.
 Although we are not approaching the topic as a list of features or a “how do I use widget x” guide, as we work through the examples we will see where some of the recent additions to WebSphere MQ’s clustering capabilities are relevant to these everyday problems.
Terminology / Key
Lots of different terminology floats around when you get into application
infrastructure discussions… Clients, applications, servers, services, requesters,
responders… For the purposes of this session:
Client – a ‘client’ in the general sense (whether connected locally or remotely); uses WebSphere MQ to send one-off datagrams or to initiate requests to services and wait for replies.
Service – a process which consumes messages and takes some action, often needing to reply to the requesting Client.
Note that there may be more than one instance of a client or service, either connected to a given queue manager or to the infrastructure as a whole.
A set of clients and services working together to achieve some useful end goal makes up an ‘application’.
[Diagram key: the pictures in this session use labelled icons for a Client, a Service, a Queue Manager (QMgr), and a Full repository queue manager.]
My First Cluster
Where it all begins…
[Diagram: Client 1 attached to its own queue manager and Service 1 attached to another.]
Where it all begins…
[Diagram: the same Client 1 and Service 1, their two queue managers now linked by manually defined channels.]
Over time…
[Diagram: more clients and services are added (Clients 1–3, Services 1–3) across a growing set of queue managers, each connection requiring manually defined channels.]
Over time...
[Diagram: four queue managers now host Clients 1–4 and Services 1–4, some services with multiple instances, and the mesh of manually defined channels keeps growing.]
Until...
[Diagram: yet more queue managers are added, and the fully meshed, manually administered channel topology becomes unmanageable.]
Basic Cluster
 This illustrates the first reason we may move to a WebSphere MQ cluster –
simplified administration. Channel definitions - and, if we choose, the
addressing of individual queues within the cluster - are tasks which no longer
have to be manually carried out by the administrator.
 At this point we still only have single instances of ‘service’ queues.
 A degree of vertical scaling can be achieved by adding instances of the Service
processes connecting to the single queue manager hosting a particular queue, if
the application can cope with ‘interleaved’ processing.
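As a concrete sketch of what ‘simplified administration’ means in MQSC terms, a minimal cluster with two full repositories and one clustered service queue might look like the following (all queue manager, channel, and host names here are illustrative):

  * On QM1, the first full repository
  ALTER QMGR REPOS(CLUSTER1)
  DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('host1(1414)') CLUSTER(CLUSTER1)
  DEFINE CHANNEL(TO.QM2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('host2(1414)') CLUSTER(CLUSTER1)

  * On QM2, the second full repository (mirror image of QM1)
  ALTER QMGR REPOS(CLUSTER1)
  DEFINE CHANNEL(TO.QM2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('host2(1414)') CLUSTER(CLUSTER1)
  DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('host1(1414)') CLUSTER(CLUSTER1)

  * On QM3, a partial repository hosting the service queue
  DEFINE CHANNEL(TO.QM3) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('host3(1414)') CLUSTER(CLUSTER1)
  DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('host1(1414)') CLUSTER(CLUSTER1)
  DEFINE QLOCAL(SERVICE1.QUEUE) CLUSTER(CLUSTER1)

Every other queue manager then needs only its own cluster receiver and one cluster sender to a full repository; the remaining channels are created automatically.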
Zooming in on an application…
[Diagram: the clustered topology again, highlighting the application the next slides focus on – Client 1 and Service 1.]
Starting to scale horizontally…
• Workload Balancing
• Service Availability
• Location Transparency (of a kind)
[Diagram: Client 1’s queue manager workload balancing requests across two queue managers, each hosting an instance of Service 1.]
Starting to scale horizontally
 By adding instances of ‘service’ queues (as sketched below), we can start to scale applications across multiple hosts and beyond the ‘vertical’ scaling possible on a single (albeit maybe multi-CPU) system.
 Using WebSphere MQ clustering allows instances to be added and removed without modifying the client or server applications
 But it may require some thinking ahead to design with this in mind – e.g. avoiding hard-coded queue manager names
 As soon as we have multiple instances, questions arise about ‘choosing’ an instance for a request, so as a natural consequence workload balancing becomes available
 Location transparency for ‘the service’ has been achieved at this point, but there is still a strong coupling between ‘an instance’ and a particular queue manager – for example for reply routing (see next section)
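Adding a second instance is simply a matter of defining the same clustered queue on another queue manager in the cluster. A sketch (names illustrative); DEFBIND(NOTFIXED) requests per-message workload balancing, which the availability discussion in the next section relies on:

  * On QM4, joining the cluster and adding a second instance of the service queue
  DEFINE CHANNEL(TO.QM4) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('host4(1414)') CLUSTER(CLUSTER1)
  DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('host1(1414)') CLUSTER(CLUSTER1)
  DEFINE QLOCAL(SERVICE1.QUEUE) CLUSTER(CLUSTER1) DEFBIND(NOTFIXED)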
Availability
Where can messages get stuck?
• Target queues
• Transmission queues
[Diagram: a request from Client 1 can be waiting on the transmission queue of the sending queue manager, or on the target queue of a Service 1 queue manager.]
The service queue manager/host fails
When a queue manager fails:
• Ensure inbound messages are not locked to it
• Restart it to release queued messages
Message reallocation – unbound messages on the transmission queue can be diverted.
Locked messages – messages on the failed queue manager are locked until it is restarted.
Restart the queue manager – use multi-instance queue managers or HA clusters to automatically restart a queue manager.
Reconnect the service – make sure the service is restarted/reconnects to the restarted queue manager.
Failed Service Queue Manager
 When a ‘service’ queue manager fails, request messages which have reached that queue manager, or responses on its transmission queues, are inevitably trapped until that queue manager can be restarted.
 However – service availability can be maintained by making sure that there is
sufficient capacity in other hosts to cope with all requests being loaded onto
them.
 This will be smoother and give higher availability if client applications can be
designed to avoid server affinity and strict message ordering requirements –
BIND_NOT_FIXED. Reallocation will then mean that even in-flight requests can
be re-routed.
 To avoid the trapped request problem, consider HA clustering technology or
multi-instance queue managers.
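A sketch of the multi-instance option (paths and names illustrative): the queue manager’s data and logs live on networked storage visible to both hosts, and each host starts the queue manager with strmqm -x so that one instance runs active and the other stands by:

  * On host A – create the queue manager with data and logs on shared storage
  crtmqm -md /shared/qmgrs -ld /shared/logs QM3
  strmqm -x QM3

  * On host B – make the existing queue manager known, then start the standby instance
  * (dspmqinf -o command QM3 on host A prints the exact addmqinf command to use)
  addmqinf -s QueueManager -v Name=QM3 -v Directory=QM3 -v Prefix=/var/mqm -v DataPath=/shared/qmgrs/QM3
  strmqm -x QM3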
Service application availability
The service application fails
• Cluster workload balancing does not take into account the availability of receiving applications, or a build-up of messages.
Blissful ignorance – the sending queue manager is unaware of the failure of one of the service instances.
Unserviced messages – half the messages will quickly start to build up on the unserviced queue.
Monitoring for service failures
[Diagram: a monitoring process attached to each queue manager hosting an instance of Service 1, checking for attached consumers.]
Monitoring for service failures
• WebSphere MQ provides a sample monitoring service
• Regularly checks for attached consuming applications
• Suitable for steady state service applications
Detecting a change – when a change to the open handles is detected, the cluster workload balancing state is modified.
Sending queue managers – newly sent messages will be sent to active instances of the queue.
Moving messages – any messages that slipped through will be transferred to an active instance of the queue.
Cluster Queue Monitoring Sample
 A relatively new tool, amqsclm, has been provided since 7.0.1.8 to ensure messages are directed towards the instances of clustered queues that have consuming applications currently attached. This allows all messages to be processed effectively even when a system is asymmetrical (i.e. consumers are not attached everywhere).
 In addition it will move already queued messages from instances of the queue where no
consumers are attached to instances of the queue with consumers. This removes the chance of
long term marooned messages when consuming applications disconnect.
 The above allows for more versatility in the use of clustered queue topologies where
applications are not under the direct control of the queue managers. It also gives a greater
degree of high availability in the processing of messages.
 The tool provides a monitoring executable to run against each queue manager in the cluster
hosting queues, monitoring the queues and reacting accordingly.
 The tool is provided as source (amqsclm.c sample) to allow the user to understand the mechanics of the
tool and customise where needed.
AMQSCLM Logic
 Based on the existing MQ cluster workload balancing mechanics:
 Uses cluster priority of individual queues – all else being equal, preferring to send messages to
instances of queues with the highest cluster priority (CLWLPRTY).
 Using CLWLPRTY always allows messages to be put to a queue instance, even when no consumers
are attached to any instance.
 Changes to a queue’s cluster configuration are automatically propagated to all queue managers in the
cluster that are workload balancing messages to that queue.
 Single executable, set to run against each queue manager with one or
more cluster queues to be monitored.
 The monitoring process polls the state of the queues on a defined
interval:
 If no consumers are attached:
 CLWLPRTY of the queue is set to zero (if not already set).
 The cluster is queried for any active (positive cluster priority) queues.
 If they exist, any queued messages on this queue are got/put to the same queue.
Cluster workload balancing will re-route the messages to the active instance(s) of
the queue in the cluster.
 If consumers are attached:
 CLWLPRTY of the queue is set to one (if not already set).
 Defining the tool as a queue manager service will ensure it is started with each queue manager.
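A minimal sketch of such a service definition (queue pattern and log paths are illustrative; check the amqsclm sample header for the full set of arguments):

  DEFINE SERVICE(AMQSCLM.SERVICE) +
         CONTROL(QMGR) +
         SERVTYPE(SERVER) +
         STARTCMD('+MQ_INSTALL_PATH+bin/amqsclm') +
         STARTARG('-m +QMNAME+ -q SERVICE1.*') +
         STDOUT('/var/mqm/errors/amqsclm.out') +
         STDERR('/var/mqm/errors/amqsclm.err')

CONTROL(QMGR) starts and stops the monitor with the queue manager itself.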
Client failures
Client availability
• Multiple locations for a client to connect to
• Allows new requests when one queue manager is unavailable.
• What happens to replies after a failure?
[Diagram: Client 1 can connect to either of two queue managers, each able to route requests to the Service 1 instances.]
Client host failure with an in-flight request/response
• Reply messages are bound to the originating queue manager, with no ability to redirect.
Request message – typically a request message will fill in the ReplyToQMgr field based on the outbound queue manager.
Reply message bound – the reply message will then be locked to that outbound queue manager.
Client host failure with an in-flight request/response
• Reply-to queue aliases and reply-to queue manager aliases can be used to blank out the outbound resolution of the ReplyToQMgr field.
• Typically, under normal running, you require the originating queue manager to receive the replies; cluster workload balancing configuration can provide this.

On each client queue manager:
  DEF QLOCAL(REPLYQ) CLUSTER(CLUSTER1)
  DEF QREMOTE(REPLYQALIAS) RNAME(REPLYQ) RQMNAME(DUMMY)

On each service queue manager:
  DEF QREMOTE(DUMMY) RNAME(' ') RQMNAME(' ')

Requesting application – the request message sets ReplyToQ to ‘REPLYQALIAS’ and ReplyToQMgr to ‘ ’.
Name resolution – the outgoing request resolves the ReplyToQ to ‘REPLYQ’ and ReplyToQMgr to ‘DUMMY’.
Replying application – the application replies to ‘REPLYQ’ on queue manager ‘DUMMY’.
Name resolution – target queue manager ‘DUMMY’ is resolved to ‘ ’, allowing cluster resolution to occur.
Failed ‘client’ queue manager
 Traditional applications will use the ReplyToQMgr which was set on the outgoing request, so you may need to consider a reply-to queue alias to route responses through workload balancing.
 Managing reconnection is beyond the scope of this session, and in an ideal world the client will reconnect to the same queue manager (which may involve HA clusters or multi-instance queue managers), however…
 Clustered reply queues give various possibilities. The simplest case is ‘shared responses’, but that is not really worth discussing further… let’s assume we need to get back to a particular client ‘instance’:
1) Use priority to prefer the ‘usual’ location (see the sketch below). Using some form of polling perhaps, ensure the client connects/reconnects to a particular queue manager whenever it is up. If it is down, the client and its replies fail over to the backup.
2) OR: use AMQSCLM again to make replies follow the connection.
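A sketch of option 1 (names illustrative): cluster the reply queue on both the usual and the backup client queue managers, with priorities favouring the usual location:

  * On the usual client queue manager
  DEFINE QLOCAL(REPLYQ) CLUSTER(CLUSTER1) CLWLPRTY(9)

  * On the backup client queue manager
  DEFINE QLOCAL(REPLYQ) CLUSTER(CLUSTER1) CLWLPRTY(4)

While the usual queue manager is available its higher CLWLPRTY attracts the replies; if it fails, replies flow to the backup instance instead.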
Location Dependency
Global applications
New York and London… but separated by an ocean and 3500 miles
• Prefer traffic to stay geographically local
• Except when you have to look further afield
• Clusters can be used to span geographies
[Diagram: clients and services in both New York and London, each connected to local queue managers, within one infrastructure spanning both sites.]
One cluster
 A single cluster is often the simplest and best approach even when large distances are involved – for example, a cluster certainly doesn’t have to be limited to a particular datacenter.
 However, often for obvious reasons we would rather keep as much traffic as
possible ‘local’, and we would like to know that if we lose our connection to the
outside world for a period of time, things can keep running.
 Conversely though, if a particular service is down locally, we’d like to make use
of the remote instance (even if it may be a bit slower than usual).
 Finally, we’d like our applications to ‘look the same’ wherever we connect – the
deploying administrator might know this instance is running in London, but
does the application really have to be coded to cope with that?
Setting this up
One cluster
• Clients always open AppQ
• A local alias determines the preferred region
• Cluster workload priority is used to target geographically local cluster aliases
• Use of CLWLPRTY enables automatic failover
• CLWLRANK can be used for manual failover

On the New York client queue managers:
  DEF QALIAS(AppQ) TARGET(NYQ)

On the London client queue managers:
  DEF QALIAS(AppQ) TARGET(LonQ)

On the New York service queue managers, which host the real request queue ReqQ:
  DEF QALIAS(NYQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(9)
  DEF QALIAS(LonQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(4)

On the London service queue managers, likewise:
  DEF QALIAS(LonQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(9)
  DEF QALIAS(NYQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(4)
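With this in place, a New York client opening AppQ resolves to NYQ, whose highest-priority instances are local; if those become unavailable, the CLWLPRTY(4) instances in London take over automatically. One way to check how a queue manager sees the alias instances and their priorities (an illustrative check, not from the original deck):

  DISPLAY QCLUSTER(NYQ) ALL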
The two cluster alternative
• The service queue managers join both geographical clusters, USA and EUROPE
• Each with separate cluster receivers for each cluster, at different cluster priorities.
• Queues are clustered in both clusters.
• The client queue managers are in their local cluster only.
[Diagram: New York and London each have clients and services; the service queue managers sit in both the USA and EUROPE clusters, the client queue managers only in their local one.]
Two cluster approach
 Define two clusters.
 USA and EUROPE
 For service queue managers, define separate cluster receiver channels, one for each
cluster. Set the CLWLPRTY high on the one for the local cluster and low for the
remote one.
 For service queue managers in New York:
 DEFINE CHANNEL(USA.NYxx) CHLTYPE(CLUSRCVR) …. CLUSTER(USA) CLWLPRTY(9)
 DEFINE CHANNEL(EUROPE.NYxx) CHLTYPE(CLUSRCVR) …. CLUSTER(EUROPE) CLWLPRTY(4)
 For service queue managers in London:
 DEFINE CHANNEL(EUROPE.LONxx) CHLTYPE(CLUSRCVR) …. CLUSTER(EUROPE) CLWLPRTY(9)
 DEFINE CHANNEL(USA.LONxx) CHLTYPE(CLUSRCVR) …. CLUSTER(USA) CLWLPRTY(4)
 Define a namelist of each service queue manager that contains both clusters and use
this when clustering queues.
 DEFINE NAMELIST(GLOBAL) NAMES(USA,EUROPE)
 DEFINE QLOCAL(QUEUE1) CLUSNL(GLOBAL)
 Client queue managers only join the cluster that is local to them.
 The client queue managers will choose the instances of queues that are on queue
managers with the highest CLWLPRTY on the channel.
 For example, a queue manager in the EUROPE cluster will only see the EUROPE.* channels. So London
queue managers will have a CLWLPRTY of 9 and New York queue managers only 4, so preferring
London whilst it is available.
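As an illustration of what one of those definitions looks like in full (the connection details and the ‘NY01’ suffix are placeholders):

  DEFINE CHANNEL(USA.NY01) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
         CONNAME('nyhost1(1414)') CLUSTER(USA) CLWLPRTY(9)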
Avoiding interference
The cluster as a pipe
• Often a WebSphere MQ backbone will be used for multiple types of traffic
[Diagram: several client/service pairs share the backbone for real time queries, big data transfer, and audit events.]
The cluster as a pipe
• Often a WebSphere MQ backbone will be used for multiple types of traffic
• When using a single cluster and the same queue managers, messages all share the same channels
• Even multiple cluster receiver channels in the same cluster will not separate out the different traffic types
[Diagram: all of the traffic types are funnelled through the same cluster channels between the queue managers.]
The cluster as a pipe – problems
 Mice and elephants
Large non-real-time data contends for resources with small ‘live’ request/response transactions.
(with due attribution to T-Rob: http://www.ibm.com/developerworks/websphere/techjournal/0804_mismes/0804_mismes.html)
 All workload balancing is at the messaging/channel level
No distinction between a request that needs a week of CPU at the other end and one which needs 1 ms.
 Pub/sub requires high ‘meshing’ – all queue managers aware of the whole cluster
Potentially lots of channel work for hosts not interested in pub/sub activity when it is superimposed on an existing cluster.
 Denial of service potential
One application out of control = a full cluster transmit queue until someone can manually intervene.
The cluster as a pipe
• Often a WebSphere MQ backbone will be used for multiple types of traffic
• When using a single cluster and the same queue managers, messages all share the same channels
• Even multiple cluster receiver channels in the same cluster will not separate out the different traffic types
• Multiple overlaid clusters with different channels enable separation
[Diagram: the same queue managers are overlaid with several clusters, each cluster with its own channels, separating the traffic types.]
The cluster as a pipe
 Putting an application in a separate cluster gives the option of also giving it its own channel.
 Applications with a need for a strictly controlled WLM ratio can be given their own clusters for this reason. However, bear in mind the cost of too many overlapping clusters (RFE 23391).
 In general, try to group applications with similar requirements – real time / batch / pub-sub – rather than ending up with a channel for every application.
 Applications don’t need to know which cluster their resources are in, as long as configuration is managed correctly on their behalf.
 New in WebSphere MQ 7.1: pub/sub can be limited to specialised clusters / queue managers using the PSCLUS attribute.
 New in WebSphere MQ 7.5: channels for different clusters can also be separated at the transmission queue level.
Workload balancing level interference
 Multiple applications sharing the same queue managers and the same cluster channels.
 Cluster workload balancing is at the channel level.
 Messages sharing the same channels, but destined for different target queues, will be counted together.
 The two channels here have an even 50/50 split of messages…
 …but the two instances of Service 1 do not!
 Split the Service 1 and Service 2 queues out into separate clusters or queue managers, or customise the workload balancing logic.
[Diagram: Client 1 sends 100 messages to Service 1 and Client 2 sends 50 to Service 2. One target queue manager hosts only Service 1; the other hosts Service 1 and Service 2. Each channel carries 75 messages, so one Service 1 instance processes 75 requests while the other sees only 25.]
Multiple cluster transmit queues (V7.5 on Distributed, V8 on z/OS)
 A much requested feature… multiple cluster transmission queues
 Separation of message traffic
 With a single transmission queue there is the potential for pending messages for cluster channel A to interfere with messages pending for cluster channel B.
 Management of messages
 Use of queue concepts such as MAXDEPTH is not useful when using a single transmission queue for more than one channel.
 Monitoring
 Tracking the number of messages processed by a cluster channel is currently difficult/impossible using the queue.
 Often requested for performance, but not necessarily the answer to it
 In reality a shared transmission queue is not always the bottleneck; often other solutions to improving channel throughput (e.g. multiple cluster receiver channels) are really what’s needed.
Multiple cluster transmit queues – automatic
 Configured on the sending queue manager, not the owners of the cluster receiver channel definitions.
 A queue manager switch automatically creates a dynamic transmission queue per cluster sender channel:
ALTER QMGR DEFCLXQ( SCTQ | CHANNEL )
 Dynamic queues are based upon the model queue SYSTEM.CLUSTER.TRANSMIT.MODEL.QUEUE
 Well known queue names: SYSTEM.CLUSTER.TRANSMIT.<CHANNEL-NAME>
[Diagram: a queue manager with cluster sender channels ChlA, ChlB and ChlC, each served by its own SYSTEM.CLUSTER.TRANSMIT.<channel-name> queue.]
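A sketch of switching a queue manager over and then checking which transmission queue each cluster sender channel is using (assuming V7.5 or later, where DISPLAY CLUSQMGR reports XMITQ):

  ALTER QMGR DEFCLXQ(CHANNEL)
  DISPLAY CLUSQMGR(*) XMITQ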
Multiple cluster transmit queues – manual
 Still configured on the sending queue manager, not the owners of the cluster receiver channel definitions.
 Administratively define a transmission queue and configure which cluster sender channels will use it:
DEFINE QLOCAL(CLUSTER1.XMITQ) CLCHNAME('GREEN.*') USAGE(XMITQ)
 Set a channel name pattern in CLCHNAME
 Single/multiple channels (wildcard)
 E.g. all channels for a specific cluster (assuming a suitable channel naming convention!)
 Any cluster sender channel not covered by a manual transmission queue defaults to the DEFCLXQ behaviour
[Diagram: channel Green.A is served by GREEN.XMITQ, while channels Pink.A and Pink.B share PINK.XMITQ.]
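Matching the diagram above, one manual transmission queue per cluster might be defined like this (queue and channel names illustrative, and assuming channels are named <cluster>.<target>):

  DEFINE QLOCAL(GREEN.XMITQ) CLCHNAME('GREEN.*') USAGE(XMITQ)
  DEFINE QLOCAL(PINK.XMITQ) CLCHNAME('PINK.*') USAGE(XMITQ)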
When things go wrong
Disaster Recovery
What do we need to recover?
Everything
Applications must be able to connect and target the exact same MQ resources.
Every existing, in-flight message must be processed whilst in DR mode.
New messages must be processed whilst in DR mode.
WebSphere MQ resources
Applications must be able to connect and target the exact same MQ resources.
New messages must be processed whilst in DR mode.
Application availability
Applications must be able to connect and target equivalent MQ resources.
New messages must be processed whilst in DR mode.
What to recover
 Before discussing Disaster recovery in a clustered environment, we need to think about
what we mean by disaster recovery even on a single queue manager basis.
 Many people will think of synchronous replication (using underlying host, disk replication
etc.) as the ‘gold standard’
 The only way we can really achieve guaranteed identical configuration OR data.
 Significant performance cost
 Impractical in many scenarios (e.g. distance between data centres)
 When moving beyond synchronous we have to consider whether it is only ‘configuration’
(queue, channel definitions, etc) or also data which we are trying to restore.
 Restoring data implies WebSphere MQ may be being questionably used as ‘system of record’ – possibly
revisit architecture?
 Application must be designed to deal with duplicate and/or lost messages
 In a cluster, the line between data and config is further blurred
 Cluster knowledge exists as state data on repository queue for example.
Recovering everything – synchronous datacenter replication
 Little need to replicate each full repository, just keep the two apart
 Outside the datacenter, queue managers must be locatable in either site
 IP switching layer or comma separated channel names
[Diagram: the queue managers and databases in Datacenter 1 are synchronously replicated to standby copies in Datacenter 2; one full repository lives in each datacenter.]
Synchronous disk replication
 In either of these scenarios life is simple from a clustering perspective.
 As long as all queue managers see the failed-over instance as ‘in the same place’, or understand 7.0.1 style (comma separated) connection names, no other consideration is required.
 Some administrators will prefer to include full repositories in the DR failover unit – there is no strong requirement for this unless other factors apply:
 As long as there is a good availability expectation for the pair, a new full repository can be added at leisure to pick up the strain in the event of the real loss of one.
 Including them may add significantly to the performance cost of replication.
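The comma separated connection name mentioned above (available from 7.0.1) lets other queue managers find a replicated queue manager at either site without any cluster changes. A sketch (hosts illustrative):

  DEFINE CHANNEL(TO.QM3) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
         CONNAME('dc1host(1414),dc2host(1414)') CLUSTER(CLUSTER1)

Connection attempts try dc1host first and fall back to dc2host, so the same advertised definition works whichever datacenter the queue manager is currently running in.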
Recovering MQ resources – asynchronous datacenter replication
 Backup queue managers contain historic messages (if any).
 Cluster state will also be historic.
 Refreshing cluster state on failover and failback is essential.
[Diagram: queue managers in Datacenter 1 asynchronously replicated to Datacenter 2; a REFRESH is required on the replicas after failover and again on failback.]
Asynchronous replication
 Remember that cluster state will be persisted as ‘data’
 REFRESH mandatory on restore from backup, failover, or ‘fail back’ to live.
 Either a cloned queue manager or a backup can be made to work
 Our experience is that when things go wrong, a ‘true’ backup is easier to work with
 Same process then whether preserving some application data or not
 Most common problem – missed refresh when reverting to ‘Live’ – things may appear to work
for a while…
 IP address / conname can:
 Change and be ok once re-advertised (after a REFRESH)
 For a manually cloned queue manager this is probably the default
 Blank connames also make this easy
 Change but not actually need to re-advertise
 E.g. Comma separated list – still need the REFRESH step though
 Remain the same
 Assuming have capability to modify routing, or DNS conname used
 A new queue manager appearing with same name as an existing (possibly failed)
queue manager will be allowed in to take its place.
 Message AMQ9468 / CSQX468I added to warn when this occurs
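For reference, the refresh itself is a single MQSC command issued on the restored queue manager (cluster name illustrative):

  REFRESH CLUSTER(CLUSTER1) REPOS(YES)

REPOS(YES) additionally discards locally cached information about the full repositories and cannot be used on a full repository itself; use REPOS(NO) there.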
Application availability – no replication, warm standby
 Backup queue managers are always running.
 Cluster workload balancing is used to direct traffic to the live queue managers.
 Messages will be trapped on the live queue managers in the event of a failure.
 Applications must be designed to accommodate this configuration.
[Diagram: live queue managers in Datacenter 1 and always-running standbys in Datacenter 2, with cluster workload balancing steering traffic to Datacenter 1.]
Warm standby
 This is the other ‘easy’ scenario, and the recommendation wherever possible.
 Applications must be designed to have loose affinities to any particular queue manager:
 Put messages to clustered queues rather than to Queue X @ QMgr Y.
 CLWLPRTY, or for manual control CLWLRANK, allows smooth switching between primary and secondary instances as required (see the sketch below).
 Data on failed queue managers will be trapped unless and until they are restarted.
 This implies applications must be able to replay.
 If this means duplication is possible, restart procedures may need to clear queues out.
 This is the same as the earlier ‘One cluster’ scenario.
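A sketch of the manual switch via CLWLRANK mentioned above (queue names illustrative): before planned maintenance on the primary, demote its instance so that new messages flow to the standby, then restore the rank afterwards:

  * On the primary, ahead of maintenance
  ALTER QLOCAL(SERVICE1.QUEUE) CLWLRANK(0)

  * On the standby
  ALTER QLOCAL(SERVICE1.QUEUE) CLWLRANK(9)

Unlike CLWLPRTY, rank is honoured without regard to channel state, which is what makes it suitable for deliberate, manual switching.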
How much do we need to recover?
[Diagram: the options plotted against COST – duplicated queue manager configuration and restore from a stale backup recover the WebSphere MQ resources (a REFRESH is required); restoring a nearly live backup (asynchronous replication) and live disk replication (synchronous replication) recover everything; live, cross-center WebSphere MQ clustering (active/active) provides application availability.]
Questions?
Thank you
Legal Disclaimer
• © IBM Corporation 2014. All Rights Reserved.
• The information contained in this publication is provided for informational purposes only. While efforts were made to verify the completeness and accuracy of the information contained in this
publication, it is provided AS IS without warranty of any kind, express or implied. In addition, this information is based on IBM’s current product plans and strategy, which are subject to change by IBM
without notice. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, this publication or any other materials. Nothing contained in this publication is intended to,
nor shall have the effect of, creating any warranties or representations from IBM or its suppliers or licensors, or altering the terms and conditions of the applicable license agreement governing the use
of IBM software.
• References in this presentation to IBM products, programs, or services do not imply that they will be available in all countries in which IBM operates. Product release dates and/or capabilities
referenced in this presentation may change at any time at IBM’s sole discretion based on market opportunities or other factors, and are not intended to be a commitment to future product or feature
availability in any way. Nothing contained in these materials is intended to, nor shall have the effect of, stating or implying that any activities undertaken by you will result in any specific sales, revenue
growth or other results.
• All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance
characteristics may vary by customer.

More Related Content

What's hot

What's hot (20)

Building highly available architectures with WAS and MQ
Building highly available architectures with WAS and MQBuilding highly available architectures with WAS and MQ
Building highly available architectures with WAS and MQ
 
Hhm 3474 mq messaging technologies and support for high availability and acti...
Hhm 3474 mq messaging technologies and support for high availability and acti...Hhm 3474 mq messaging technologies and support for high availability and acti...
Hhm 3474 mq messaging technologies and support for high availability and acti...
 
Understanding mq deployment choices and use cases
Understanding mq deployment choices and use casesUnderstanding mq deployment choices and use cases
Understanding mq deployment choices and use cases
 
IBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
IBM Integration Bus & WebSphere MQ - High Availability & Disaster RecoveryIBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
IBM Integration Bus & WebSphere MQ - High Availability & Disaster Recovery
 
IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ Clusters
IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ ClustersIBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ Clusters
IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ Clusters
 
InterConnect 2016: What's new in IBM MQ
InterConnect 2016: What's new in IBM MQInterConnect 2016: What's new in IBM MQ
InterConnect 2016: What's new in IBM MQ
 
IBM MQ - better application performance
IBM MQ - better application performanceIBM MQ - better application performance
IBM MQ - better application performance
 
IBM Integration Bus High Availability Overview
IBM Integration Bus High Availability OverviewIBM Integration Bus High Availability Overview
IBM Integration Bus High Availability Overview
 
MQ Appliance - Intro and 8.0.0.5 updates
MQ Appliance - Intro and 8.0.0.5 updatesMQ Appliance - Intro and 8.0.0.5 updates
MQ Appliance - Intro and 8.0.0.5 updates
 
Ame 4166 ibm mq appliance
Ame 4166 ibm mq applianceAme 4166 ibm mq appliance
Ame 4166 ibm mq appliance
 
Expanding your options with the MQ Appliance
Expanding your options with the MQ ApplianceExpanding your options with the MQ Appliance
Expanding your options with the MQ Appliance
 
IBM MQ Appliance - Administration simplified
IBM MQ Appliance - Administration simplifiedIBM MQ Appliance - Administration simplified
IBM MQ Appliance - Administration simplified
 
Expanding your options with the IBM MQ Appliance - IBM InterConnect 2016
Expanding your options with the IBM MQ Appliance - IBM InterConnect 2016Expanding your options with the IBM MQ Appliance - IBM InterConnect 2016
Expanding your options with the IBM MQ Appliance - IBM InterConnect 2016
 
IBM MQ: Managing Workloads, Scaling and Availability with MQ Clusters
IBM MQ: Managing Workloads, Scaling and Availability with MQ ClustersIBM MQ: Managing Workloads, Scaling and Availability with MQ Clusters
IBM MQ: Managing Workloads, Scaling and Availability with MQ Clusters
 
IBM MQ - Monitoring and Managing Hybrid Messaging Environments
IBM MQ - Monitoring and Managing Hybrid Messaging EnvironmentsIBM MQ - Monitoring and Managing Hybrid Messaging Environments
IBM MQ - Monitoring and Managing Hybrid Messaging Environments
 
HHM-3540: The IBM MQ Light API: From Developer Laptop to Enterprise Data Cen...
 HHM-3540: The IBM MQ Light API: From Developer Laptop to Enterprise Data Cen... HHM-3540: The IBM MQ Light API: From Developer Laptop to Enterprise Data Cen...
HHM-3540: The IBM MQ Light API: From Developer Laptop to Enterprise Data Cen...
 
IBM MQ High Availabillity and Disaster Recovery (2017 version)
IBM MQ High Availabillity and Disaster Recovery (2017 version)IBM MQ High Availabillity and Disaster Recovery (2017 version)
IBM MQ High Availabillity and Disaster Recovery (2017 version)
 
MQ Security Overview
MQ Security OverviewMQ Security Overview
MQ Security Overview
 
IBM MQ v8 and JMS 2.0
IBM MQ v8 and JMS 2.0IBM MQ v8 and JMS 2.0
IBM MQ v8 and JMS 2.0
 
IBM MQ Advanced - IBM InterConnect 2016
IBM MQ Advanced - IBM InterConnect 2016IBM MQ Advanced - IBM InterConnect 2016
IBM MQ Advanced - IBM InterConnect 2016
 

Similar to IBM Managing Workload Scalability with MQ Clusters

Impact 2009 1783 Achieving Availability With W A Sz User Experience
Impact 2009 1783  Achieving  Availability With  W A Sz   User ExperienceImpact 2009 1783  Achieving  Availability With  W A Sz   User Experience
Impact 2009 1783 Achieving Availability With W A Sz User Experience
Elena Nanos
 

Similar to IBM Managing Workload Scalability with MQ Clusters (20)

Planning for MQ in the cloud MQTC 2017
Planning for MQ in the cloud MQTC 2017Planning for MQ in the cloud MQTC 2017
Planning for MQ in the cloud MQTC 2017
 
Designing IBM MQ deployments for the cloud generation
Designing IBM MQ deployments for the cloud generationDesigning IBM MQ deployments for the cloud generation
Designing IBM MQ deployments for the cloud generation
 
Deep Dive on Microservices and Amazon ECS by Raul Frias, Solutions Architect,...
Deep Dive on Microservices and Amazon ECS by Raul Frias, Solutions Architect,...Deep Dive on Microservices and Amazon ECS by Raul Frias, Solutions Architect,...
Deep Dive on Microservices and Amazon ECS by Raul Frias, Solutions Architect,...
 
Running Microservices on Amazon ECS - AWS April 2016 Webinar Series
Running Microservices on Amazon ECS - AWS April 2016 Webinar SeriesRunning Microservices on Amazon ECS - AWS April 2016 Webinar Series
Running Microservices on Amazon ECS - AWS April 2016 Webinar Series
 
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with K...
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with K...Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with K...
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with K...
 
Fault tolerant and scalable ibm mq
Fault tolerant and scalable ibm mqFault tolerant and scalable ibm mq
Fault tolerant and scalable ibm mq
 
Impact 2009 1783 Achieving Availability With W A Sz User Experience
Impact 2009 1783  Achieving  Availability With  W A Sz   User ExperienceImpact 2009 1783  Achieving  Availability With  W A Sz   User Experience
Impact 2009 1783 Achieving Availability With W A Sz User Experience
 
RabbitMQ Status Quo Critical Review
RabbitMQ Status Quo Critical ReviewRabbitMQ Status Quo Critical Review
RabbitMQ Status Quo Critical Review
 
Arsitektur Aplikasi Modern - Faisal Henry Susanto
Arsitektur Aplikasi Modern - Faisal Henry SusantoArsitektur Aplikasi Modern - Faisal Henry Susanto
Arsitektur Aplikasi Modern - Faisal Henry Susanto
 
Deep Dive on Microservices and Amazon ECS
Deep Dive on Microservices and Amazon ECSDeep Dive on Microservices and Amazon ECS
Deep Dive on Microservices and Amazon ECS
 
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with A...
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with A...Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with A...
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with A...
 
Mule soft mcia-level-1 Dumps
Mule soft mcia-level-1 DumpsMule soft mcia-level-1 Dumps
Mule soft mcia-level-1 Dumps
 
Microservices architecture
Microservices architectureMicroservices architecture
Microservices architecture
 
Kubecon 2023 EU - KServe - The State and Future of Cloud-Native Model Serving
Kubecon 2023 EU - KServe - The State and Future of Cloud-Native Model ServingKubecon 2023 EU - KServe - The State and Future of Cloud-Native Model Serving
Kubecon 2023 EU - KServe - The State and Future of Cloud-Native Model Serving
 
Distributed Services Scheduling and Cloud Provisioning
Distributed Services Scheduling and Cloud ProvisioningDistributed Services Scheduling and Cloud Provisioning
Distributed Services Scheduling and Cloud Provisioning
 
Deep Dive on Microservices and Amazon ECS
Deep Dive on Microservices and Amazon ECSDeep Dive on Microservices and Amazon ECS
Deep Dive on Microservices and Amazon ECS
 
Running IBM MQ in the Cloud
Running IBM MQ in the CloudRunning IBM MQ in the Cloud
Running IBM MQ in the Cloud
 
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 Preview
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 PreviewCloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 Preview
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 Preview
 
High-Speed Reactive Microservices - trials and tribulations
High-Speed Reactive Microservices - trials and tribulationsHigh-Speed Reactive Microservices - trials and tribulations
High-Speed Reactive Microservices - trials and tribulations
 
IBM MQ High Availability 2019
IBM MQ High Availability 2019IBM MQ High Availability 2019
IBM MQ High Availability 2019
 

More from IBM Systems UKI

IBM MQ Light Service for Bluemix
IBM MQ Light Service for BluemixIBM MQ Light Service for Bluemix
IBM MQ Light Service for Bluemix
IBM Systems UKI
 
IBM How to Develop Responsive Applications
IBM How to Develop Responsive ApplicationsIBM How to Develop Responsive Applications
IBM How to Develop Responsive Applications
IBM Systems UKI
 
IBM Latest Features Deep Dive
IBM Latest Features Deep DiveIBM Latest Features Deep Dive
IBM Latest Features Deep Dive
IBM Systems UKI
 
IBM Performance and Accounting
IBM Performance and AccountingIBM Performance and Accounting
IBM Performance and Accounting
IBM Systems UKI
 
IBM Intro to Using MQ with WAS and Liberty
IBM Intro to Using MQ with WAS and LibertyIBM Intro to Using MQ with WAS and Liberty
IBM Intro to Using MQ with WAS and Liberty
IBM Systems UKI
 
IBM Publish Subscribe in a Network
IBM Publish Subscribe in a NetworkIBM Publish Subscribe in a Network
IBM Publish Subscribe in a Network
IBM Systems UKI
 
IBM MQ Channel Authentication
IBM MQ Channel AuthenticationIBM MQ Channel Authentication
IBM MQ Channel Authentication
IBM Systems UKI
 
IBM MQ Security Deep Dive
IBM MQ Security Deep DiveIBM MQ Security Deep Dive
IBM MQ Security Deep Dive
IBM Systems UKI
 

More from IBM Systems UKI (20)

PureApp Hybrid Cloud Jonathan Langley Presentation 11th September 2014
PureApp Hybrid Cloud Jonathan Langley Presentation 11th September 2014PureApp Hybrid Cloud Jonathan Langley Presentation 11th September 2014
PureApp Hybrid Cloud Jonathan Langley Presentation 11th September 2014
 
PureApp Hybrid Cloud - Mark Willemse ING Presentation 11th September 2014
PureApp Hybrid Cloud - Mark Willemse ING Presentation 11th September 2014PureApp Hybrid Cloud - Mark Willemse ING Presentation 11th September 2014
PureApp Hybrid Cloud - Mark Willemse ING Presentation 11th September 2014
 
PureSystems Summary and Actions, John Kaemmerer and Gerry Novan, 11th Sept 14
PureSystems Summary and Actions, John Kaemmerer and Gerry Novan, 11th Sept 14PureSystems Summary and Actions, John Kaemmerer and Gerry Novan, 11th Sept 14
PureSystems Summary and Actions, John Kaemmerer and Gerry Novan, 11th Sept 14
 
PureSystems on the Private Cloud, John Kaemmerer and Gerry Novan, 11th Sept 14
PureSystems on the Private Cloud, John Kaemmerer and Gerry Novan, 11th Sept 14PureSystems on the Private Cloud, John Kaemmerer and Gerry Novan, 11th Sept 14
PureSystems on the Private Cloud, John Kaemmerer and Gerry Novan, 11th Sept 14
 
Pure Systems Patterns of Expertise - John Kaemmerer and Gerry Kovan, 11th Sep...
Pure Systems Patterns of Expertise - John Kaemmerer and Gerry Kovan, 11th Sep...Pure Systems Patterns of Expertise - John Kaemmerer and Gerry Kovan, 11th Sep...
Pure Systems Patterns of Expertise - John Kaemmerer and Gerry Kovan, 11th Sep...
 
Introduction to Pure, John Kaemmerer and Gerry Kovan
Introduction to Pure, John Kaemmerer and Gerry KovanIntroduction to Pure, John Kaemmerer and Gerry Kovan
Introduction to Pure, John Kaemmerer and Gerry Kovan
 
PureSystems on the Public Cloud John Kaemmerer and Gerry Novan, 11th Sep 14
PureSystems on the Public Cloud John Kaemmerer and Gerry Novan, 11th Sep 14PureSystems on the Public Cloud John Kaemmerer and Gerry Novan, 11th Sep 14
PureSystems on the Public Cloud John Kaemmerer and Gerry Novan, 11th Sep 14
 
Savio Rodrigues - Cloud Enable Your Ent App - 11th September
Savio Rodrigues - Cloud Enable Your Ent App - 11th SeptemberSavio Rodrigues - Cloud Enable Your Ent App - 11th September
Savio Rodrigues - Cloud Enable Your Ent App - 11th September
 
Andrew Darley Presentation - Pure Future is Hybrid Cloud (London 11th September)
Andrew Darley Presentation - Pure Future is Hybrid Cloud (London 11th September)Andrew Darley Presentation - Pure Future is Hybrid Cloud (London 11th September)
Andrew Darley Presentation - Pure Future is Hybrid Cloud (London 11th September)
 
Icon solutions presentation - Pure Hybrid Cloud Event, 11th September London
Icon solutions presentation - Pure Hybrid Cloud Event, 11th September LondonIcon solutions presentation - Pure Hybrid Cloud Event, 11th September London
Icon solutions presentation - Pure Hybrid Cloud Event, 11th September London
 
IBM MQ Light Service for Bluemix
IBM MQ Light Service for BluemixIBM MQ Light Service for Bluemix
IBM MQ Light Service for Bluemix
 
IBM How to Develop Responsive Applications
IBM How to Develop Responsive ApplicationsIBM How to Develop Responsive Applications
IBM How to Develop Responsive Applications
 
IBM Latest Features Deep Dive
IBM Latest Features Deep DiveIBM Latest Features Deep Dive
IBM Latest Features Deep Dive
 
IBM Performance and Accounting
IBM Performance and AccountingIBM Performance and Accounting
IBM Performance and Accounting
 
IBM Intro to Using MQ with WAS and Liberty
IBM Intro to Using MQ with WAS and LibertyIBM Intro to Using MQ with WAS and Liberty
IBM Intro to Using MQ with WAS and Liberty
 
IBM Publish Subscribe in a Network
IBM Publish Subscribe in a NetworkIBM Publish Subscribe in a Network
IBM Publish Subscribe in a Network
 
IBM MQ V8 annd JMS 2.0
IBM MQ V8 annd JMS 2.0IBM MQ V8 annd JMS 2.0
IBM MQ V8 annd JMS 2.0
 
IBM MQ Channel Authentication
IBM MQ Channel AuthenticationIBM MQ Channel Authentication
IBM MQ Channel Authentication
 
IBM MQ Security Deep Dive
IBM MQ Security Deep DiveIBM MQ Security Deep Dive
IBM MQ Security Deep Dive
 
IBM What's New in MQ V8
IBM What's New in MQ V8IBM What's New in MQ V8
IBM What's New in MQ V8
 

Recently uploaded

+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
Health
 
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
masabamasaba
 
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
masabamasaba
 

Recently uploaded (20)

Exploring the Best Video Editing App.pdf
Exploring the Best Video Editing App.pdfExploring the Best Video Editing App.pdf
Exploring the Best Video Editing App.pdf
 
SHRMPro HRMS Software Solutions Presentation
SHRMPro HRMS Software Solutions PresentationSHRMPro HRMS Software Solutions Presentation
SHRMPro HRMS Software Solutions Presentation
 
8257 interfacing 2 in microprocessor for btech students
8257 interfacing 2 in microprocessor for btech students8257 interfacing 2 in microprocessor for btech students
8257 interfacing 2 in microprocessor for btech students
 
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
 
Announcing Codolex 2.0 from GDK Software
Announcing Codolex 2.0 from GDK SoftwareAnnouncing Codolex 2.0 from GDK Software
Announcing Codolex 2.0 from GDK Software
 
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
 
Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Models
 
%in Harare+277-882-255-28 abortion pills for sale in Harare
%in Harare+277-882-255-28 abortion pills for sale in Harare%in Harare+277-882-255-28 abortion pills for sale in Harare
%in Harare+277-882-255-28 abortion pills for sale in Harare
 
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
 
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
 
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
%in Hazyview+277-882-255-28 abortion pills for sale in Hazyview
%in Hazyview+277-882-255-28 abortion pills for sale in Hazyview%in Hazyview+277-882-255-28 abortion pills for sale in Hazyview
%in Hazyview+277-882-255-28 abortion pills for sale in Hazyview
 
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
 
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
 
Define the academic and professional writing..pdf
Define the academic and professional writing..pdfDefine the academic and professional writing..pdf
Define the academic and professional writing..pdf
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
 
Payment Gateway Testing Simplified_ A Step-by-Step Guide for Beginners.pdf
Payment Gateway Testing Simplified_ A Step-by-Step Guide for Beginners.pdfPayment Gateway Testing Simplified_ A Step-by-Step Guide for Beginners.pdf
Payment Gateway Testing Simplified_ A Step-by-Step Guide for Beginners.pdf
 
%in tembisa+277-882-255-28 abortion pills for sale in tembisa
%in tembisa+277-882-255-28 abortion pills for sale in tembisa%in tembisa+277-882-255-28 abortion pills for sale in tembisa
%in tembisa+277-882-255-28 abortion pills for sale in tembisa
 
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
 
%in Lydenburg+277-882-255-28 abortion pills for sale in Lydenburg
%in Lydenburg+277-882-255-28 abortion pills for sale in Lydenburg%in Lydenburg+277-882-255-28 abortion pills for sale in Lydenburg
%in Lydenburg+277-882-255-28 abortion pills for sale in Lydenburg
 

IBM Managing Workload Scalability with MQ Clusters

  • 1. © 2014 IBM Corporation IBM Software Group WebSphere Software Managing Workloads, Scaling and Availability with MQ Clusters Adrian Dick adrian.dick@uk.ibm.com
  • 2. 2 WebSphere MQ IBM Software Group | WebSphere software Agenda Scenario 1 My first cluster  Starting to scale Scenario 2 Service availability  Routing around failures Scenario 3 Location dependency  Active active scenarios with ties to home Scenario 4 Avoiding interference  What are we sharing? Scenario 5 When things go wrong  Levels of DR and clustering implications
  • 3. 3 WebSphere MQ IBM Software Group | WebSphere software Introduction  This session approaches the topic from a point of view of a set of common clustering scenarios or ‘use cases’.  We will build up from a fairly simple and common initial clustering setup to tackle some of the more complicated issues which often come up in evolving clustered environments.  Although not looking at the topic from a list of features or “how do I use widget x” approach, as we work through the examples we will see where some of the recent additions to WebSphere MQ’s clustering capabilities are relevant to these everyday problems.
  • 4. 4 WebSphere MQ IBM Software Group | WebSphere software Terminology / Key Lots of different terminology floats around when you get into application infrastructure discussions… Clients, applications, servers, services, requesters, responders… For the purposes of this session: Client – a ‘client’ in the general sense (whether connected locally or remotely), uses WebSphere MQ to send one off datagrams or initiates requests to services and waits for replies. Service – a process which consumes messages and takes some action, often needing to reply to the requesting Client. Queue Manager Note that there may be more than one Instance of a client or service, either connected to a given queue manager or the infrastructure as a whole. A set of clients and services working together to achieve some useful end goal make up an ‘application’. Service Client QMgr Full repository Queue Manager QMgr
  • 5. 5 WebSphere MQ IBM Software Group | WebSphere software My First Cluster 5
  • 6. 6 WebSphere MQ IBM Software Group | WebSphere software Where it all begins… QMgrClient 1 Service 1 QMgr
  • 7. 7 WebSphere MQ IBM Software Group | WebSphere software Where it all begins… QMgr QMgr Client 1 Service 1
  • 8. 8 WebSphere MQ IBM Software Group | WebSphere software Client 1 Over time… Service 1 Client 2 Client 2 Client 3 Service 3 Service 2 Service 1 QMgrQMgr QMgrQMgr Client 1Client 1
  • 9. 9 WebSphere MQ IBM Software Group | WebSphere software Over time... App 1 Service 1 Client 2 Client 2 Client 3 Service 2 App 1Client 1 Service 1 QMgr QMgr QMgr QMgr Client 3 Client 1 Client 4 App 4App 4Client 4 Service 4 Service 3 Service 1Service 1 Service 3
  • 10. 10 WebSphere MQ IBM Software Group | WebSphere software Until... App 1 Service 1 Client 2 Client 2 Client 3 Service 2 App 1Client 1 Service 1 QMgr QMgr QMgr QMgr Client 3 Client 1 Client 4 App 4App 4Client 4 Service 4 Service 3 Service 1Service 1 Service 3 QMgr QMgr
  • 11. 11 WebSphere MQ IBM Software Group | WebSphere software Basic Cluster  This illustrates the first reason we may move to a WebSphere MQ cluster – simplified administration. Channel definitions - and, if we choose, the addressing of individual queues within the cluster - are tasks which no longer have to be manually carried out by the administrator.  At this point we still only have single instances of ‘service’ queues.  A degree of vertical scaling can be achieved by adding instances of the Service processes connecting to the single queue manager hosting a particular queue, if the application can cope with ‘interleaved’ processing.
  • 12. 12 WebSphere MQ IBM Software Group | WebSphere software Service 1 Client 2 Client 2 Client 3 Service 2 App 1App 1Client 1 Service 1 QMgr QMgr QMgr QMgr Client 3 Client 1 Client 4 App 4App 4Client 4 Service 4 Service 3 Service 1Service 1 Service 3 QMgr QMgr Zooming in on an application…
  • 13. 13 WebSphere MQ IBM Software Group | WebSphere software Starting to scale horizontally… • Workload Balancing • Service Availability • Location Transparency (of a kind) Service 1 App 1App 1Client 1 Service 1 QMgr QMgr QMgr
  • 14. 14 WebSphere MQ IBM Software Group | WebSphere software Starting to scale horizontally  By adding instances of ‘service’ queues, we can scale applications across multiple hosts, beyond the ‘vertical’ scaling possible on a single (albeit maybe multi-CPU) system.  WebSphere MQ clustering allows instances to be added and removed without modifying the client or service applications – though it may require some thinking ahead to design with this in mind, e.g. avoiding hard-coded queue manager names (a sketch of adding an instance follows below).  As soon as we have multiple instances, questions arise about ‘choosing’ an instance for a request, so workload balancing becomes available as a natural consequence.  Location transparency for ‘the service’ has been achieved at this point, but there is still a strong coupling between ‘an instance’ and a particular queue manager – for example for reply routing (see next section).
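  As a sketch of how little is needed to add a second instance (continuing the illustrative names used earlier): defining a queue of the same name, clustered in the same cluster, on another queue manager is sufficient – sending queue managers then workload balance between the instances with no application change:
    * On a second queue manager that has already joined the DEMO cluster
    DEFINE QLOCAL(SERVICE1.QUEUE) CLUSTER(DEMO)
    * Clients keep opening 'SERVICE1.QUEUE' by name only - no queue manager name -
    * so the new instance is picked up automatically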
  • 15. 15 WebSphere MQ IBM Software Group | WebSphere software Availability
  • 16. 16 WebSphere MQ IBM Software Group | WebSphere software Where can messages get stuck? • Target queues • Transmission queues [diagram: a request’s path, with the transmission queue on the client’s queue manager and the target queue on each service queue manager]
  • 17. 17 WebSphere MQ IBM Software Group | WebSphere software The service queue manager/host fails. When a queue manager fails: • Ensure inbound messages are not locked to it – message reallocation means unbound messages on the transmission queue can be diverted. • Restart it to release queued messages – messages on the failed queue manager are locked until it is restarted; use multi-instance queue managers or HA clusters to restart it automatically, and make sure the service is restarted/reconnects to the restarted queue manager.
  • 18. 18 WebSphere MQ IBM Software Group | WebSphere software Failed Service Queue Manager  When a ‘service’ queue manager fails, request messages which have reached that queue manager, and responses on its transmission queues, are inevitably trapped until that queue manager can be restarted.  However, service availability can be maintained by making sure there is sufficient capacity on the other hosts to cope with all requests being loaded onto them.  This will be smoother and give higher availability if client applications are designed to avoid server affinity and strict message ordering requirements – BIND_NOT_FIXED (see the sketch below). Reallocation then means that even in-flight requests can be re-routed.  To avoid the trapped request problem, consider HA clustering technology or multi-instance queue managers.
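  A sketch of the queue attribute this relies on – DEFBIND(NOTFIXED) stops messages being bound to one instance at MQOPEN time, so reallocation can divert them (applications can also request this explicitly with the MQOO_BIND_NOT_FIXED open option):
    * Allow in-flight messages to be reallocated to another instance if a channel fails
    DEFINE QLOCAL(SERVICE1.QUEUE) CLUSTER(DEMO) DEFBIND(NOTFIXED) REPLACE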
  • 19. 19 WebSphere MQ IBM Software Group | WebSphere software Service application availability 19
  • 20. 20 WebSphere MQ IBM Software Group | WebSphere software The service application fails • Cluster workload balancing does not take into account the availability of receiving applications, or a build-up of messages. • Blissful ignorance: the sending queue manager is unaware of the failure of one of the service instances. • Unserviced messages: half the messages will quickly start to build up on the failed instance’s service queue.
  • 21. 21 WebSphere MQ IBM Software Group | WebSphere software Monitoring for service failures [diagram: the same topology, with a monitor watching each service queue for attached consumers]
  • 22. 22 WebSphere MQ IBM Software Group | WebSphere software Monitoring for service failures • WebSphere MQ provides a sample monitoring service • Regularly checks for attached consuming applications • Suitable for steady-state service applications. Detecting a change: when a change to the open handles is detected, the cluster workload balancing state is modified. Sending queue managers: newly sent messages will be sent to active instances of the queue. Moving messages: any messages that slipped through will be transferred to an active instance of the queue.
  • 23. 23 WebSphere MQ IBM Software Group | WebSphere software Cluster Queue Monitoring Sample  A relatively new tool, amqsclm, is provided since 7.0.1.8 to ensure messages are directed towards the instances of clustered queues that have consuming applications currently attached. This allows all messages to be processed effectively even when a system is asymmetrical (i.e. consumers not attached everywhere).  In addition it will move already queued messages from instances of the queue where no consumers are attached to instances of the queue with consumers. This removes the chance of long term marooned messages when consuming applications disconnect.  The above allows for more versatility in the use of clustered queue topologies where applications are not under the direct control of the queue managers. It also gives a greater degree of high availability in the processing of messages.  The tool provides a monitoring executable to run against each queue manager in the cluster hosting queues, monitoring the queues and reacting accordingly.  The tool is provided as source (amqsclm.c sample) to allow the user to understand the mechanics of the tool and customise where needed.
  • 24. 24 WebSphere MQ IBM Software Group | WebSphere software AMQSCLM Logic  Based on the existing MQ cluster workload balancing mechanics: it uses the cluster priority of individual queues – all else being equal, messages are sent to instances of queues with the highest cluster priority (CLWLPRTY). Using CLWLPRTY always allows messages to be put to a queue instance, even when no consumers are attached to any instance. Changes to a queue’s cluster configuration are automatically propagated to all queue managers in the cluster that are workload balancing messages to that queue.  A single executable, set to run against each queue manager with one or more cluster queues to be monitored.  The monitoring process polls the state of the queues on a defined interval.  If no consumers are attached: CLWLPRTY of the queue is set to zero (if not already set); the cluster is queried for any active (positive cluster priority) instances; if they exist, any queued messages on this queue are got/put to the same queue, and cluster workload balancing re-routes them to the active instance(s) of the queue in the cluster.  If consumers are attached: CLWLPRTY of the queue is set to one (if not already set).  Defining the tool as a queue manager service will ensure it is started with each queue manager (a sketch follows below).
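  A minimal sketch of defining amqsclm as a queue manager service so that it starts and stops with the queue manager. The install path and program arguments shown are assumptions for illustration – check them against the usage notes in the amqsclm.c sample before relying on them:
    * Run the cluster queue monitor as a queue manager service (illustrative path/arguments)
    DEFINE SERVICE(CLUSTER.Q.MON) +
           CONTROL(QMGR) +
           SERVTYPE(SERVER) +
           STARTCMD('/opt/mqm/samp/bin/amqsclm') +
           STARTARG('-m QM1 -q SERVICE1.*') +
           STDOUT('/var/mqm/errors/amqsclm.out') +
           STDERR('/var/mqm/errors/amqsclm.err')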
  • 25. 25 WebSphere MQ IBM Software Group | WebSphere software Client failures
  • 26. 26 WebSphere MQ IBM Software Group | WebSphere software Client availability • Multiple locations for a client to connect to • Allows new requests when one queue manager is unavailable • But what happens to replies after a failure?
  • 27. 27 WebSphere MQ IBM Software Group | WebSphere software Client host failure with an in-flight request/response • Typically a request message fills in the ReplyToQMgr based on the outbound queue manager. • The reply message is therefore bound to that originating queue manager, with no ability to redirect it.
  • 28. 28 WebSphere MQ IBM Software Group | WebSphere software Client host failure with an in-flight request/response • Reply-to queue aliases and reply-to queue manager aliases can be used to blank out the outbound resolution of the ReplyToQMgr field. • Typically, under normal running, you require the originating queue manager to receive the replies; cluster workload balancing configuration can provide this. The sequence: 1. Requesting application: the request message sets ReplyToQ to ‘REPLYQALIAS’ and ReplyToQMgr to ‘ ’. 2. Name resolution: the outgoing request resolves the ReplyToQ to ‘REPLYQ’ and ReplyToQMgr to ‘DUMMY’, via DEF QLOCAL(REPLYQ) CLUSTER(CLUSTER1) and DEF QREMOTE(REPLYQALIAS) RNAME(REPLYQ) RQMNAME(DUMMY) on the client queue managers. 3. Replying application: the application replies to ‘REPLYQ’ on queue manager ‘DUMMY’. 4. Name resolution: target queue manager ‘DUMMY’ is resolved to ‘ ’ – DEF QREMOTE(DUMMY) RNAME(‘ ’) RQMNAME(‘ ’) – allowing cluster resolution to occur.
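  Pulling the slide’s definitions together, a sketch of the round trip (queue, cluster and alias names are those on the slide):
    * On each client queue manager: the real reply queue, clustered, plus the alias that
    * the requesting application names in its ReplyToQ (RQMNAME is deliberately a dummy)
    DEF QLOCAL(REPLYQ) CLUSTER(CLUSTER1)
    DEF QREMOTE(REPLYQALIAS) RNAME(REPLYQ) RQMNAME(DUMMY)
    * On the service queue manager: blank out DUMMY, so the reply is not bound to the
    * originating queue manager and cluster workload balancing resolves REPLYQ instead
    DEF QREMOTE(DUMMY) RNAME(' ') RQMNAME(' ')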
  • 29. 29 WebSphere MQ IBM Software Group | WebSphere software Failed ‘client’ queue manager  Traditional applications will use the ReplyToQMgr which was set on the outgoing request, so you may need to use a reply-to queue alias to route responses through workload balancing instead.  Managing reconnection is beyond the scope of this session, and in an ideal world the client will reconnect to the same queue manager (which may involve HA clusters or multi-instance queue managers), however…  Clustered reply queues give various possibilities. The simplest case is ‘shared responses’, but that is not really worth discussing further… let’s assume replies need to get back to a particular client ‘instance’. 1) Use priority to prefer the ‘usual’ location: using some form of polling perhaps, ensure the client connects/reconnects to a particular queue manager whenever it is up; if it is down, the client and its replies fail over to the backup. 2) Or use AMQSCLM again, so replies follow the connection.
  • 30. 30 WebSphere MQ IBM Software Group | WebSphere software Location Dependency 30
  • 31. 31 WebSphere MQ IBM Software Group | WebSphere software Global applications – New York and London, separated by an ocean and 3,500 miles • Prefer traffic to stay geographically local • Except when you have to look further afield • Clusters can be used to span geographies
  • 32. 32 WebSphere MQ IBM Software Group | WebSphere software One cluster  A single cluster is often the simplest and best approach even when large distances are involved – a cluster certainly doesn’t have to be limited to a particular datacenter.  However, often for obvious reasons we would rather keep as much traffic as possible ‘local’, and we would like to know that if we lose our connection to the outside world for a period of time, things can keep running.  Conversely, if a particular service is down locally, we’d like to make use of the remote instance (even if it may be a bit slower than usual).  Finally, we’d like our applications to ‘look the same’ wherever we connect – the deploying administrator might know this instance is running in London, but does the application really have to be coded to cope with that?
  • 33. 33 WebSphere MQ IBM Software Group | WebSphere software Setting this up
  • 34. 34 WebSphere MQ IBM Software Group | WebSphere software One cluster • Clients always open AppQ • A local alias determines the preferred region • Cluster workload priority is used to target the geographically local clustered alias • Use of CLWLPRTY enables automatic failover • CLWLRANK can be used for manual failover.
    On New York client queue managers: DEF QALIAS(AppQ) TARGET(NYQ); on London client queue managers: DEF QALIAS(AppQ) TARGET(LonQ).
    On New York service queue managers: DEF QALIAS(NYQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(9) and DEF QALIAS(LonQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(4).
    On London service queue managers: DEF QALIAS(LonQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(9) and DEF QALIAS(NYQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(4).
  • 35. 35 WebSphere MQ IBM Software Group | WebSphere software The two cluster alternative • The service queue managers join both geographical clusters (USA and EUROPE) • Each has separate cluster receivers for each cluster, at different cluster priorities • Queues are clustered in both clusters • The client queue managers are in their local cluster only.
  • 36. 36 WebSphere MQ IBM Software Group | WebSphere software Two cluster approach  Define two clusters: USA and EUROPE.  For service queue managers, define separate cluster receiver channels, one for each cluster; set CLWLPRTY high on the one for the local cluster and low for the remote one.
    For service queue managers in New York: DEFINE CHANNEL(USA.NYxx) CHLTYPE(CLUSRCVR) … CLUSTER(USA) CLWLPRTY(9) and DEFINE CHANNEL(EUROPE.NYxx) CHLTYPE(CLUSRCVR) … CLUSTER(EUROPE) CLWLPRTY(4).
    For service queue managers in London: DEFINE CHANNEL(EUROPE.LONxx) CHLTYPE(CLUSRCVR) … CLUSTER(EUROPE) CLWLPRTY(9) and DEFINE CHANNEL(USA.LONxx) CHLTYPE(CLUSRCVR) … CLUSTER(USA) CLWLPRTY(4).
    Define a namelist on each service queue manager that contains both clusters, and use this when clustering queues: DEFINE NAMELIST(GLOBAL) NAMES(USA,EUROPE) and DEFINE QLOCAL(QUEUE1) CLUSNL(GLOBAL).
    Client queue managers only join the cluster that is local to them.  The client queue managers will choose the instances of queues on queue managers whose channel has the highest CLWLPRTY. For example, a queue manager in the EUROPE cluster will only see the EUROPE.* channels, so London queue managers will have a CLWLPRTY of 9 and New York queue managers only 4, preferring London while it is available.
  • 37. 37 WebSphere MQ IBM Software Group | WebSphere software Avoiding interference 37
  • 38. 38 WebSphere MQ IBM Software Group | WebSphere software The cluster as a pipe • Often a WebSphere MQ backbone will be used for multiple types of traffic: real-time queries, big data transfer, audit events.
  • 39. 39 WebSphere MQ IBM Software Group | WebSphere software The cluster as a pipe • Often a WebSphere MQ backbone will be used for multiple types of traffic • When using a single cluster and the same queue managers, messages all share the same channels • Even multiple cluster receiver channels in the same cluster will not separate out the different traffic types.
  • 40. 40 WebSphere MQ IBM Software Group | WebSphere software The cluster as a pipe – problems  Mice and elephants: large non-real-time data contends for resources with small ‘live’ request/response transactions (with due attribution to T-Rob: http://www.ibm.com/developerworks/websphere/techjournal/0804_mismes/0804_mismes.html).  All workload balancing happens at the messaging/channel level: no distinction between a request that needs a week of CPU at the other end and one which needs 1 ms.  Pub/sub requires high ‘meshing’ – all queue managers aware of the whole cluster – potentially lots of channel work for hosts not interested in pub/sub activity when superimposed on an existing cluster.  Denial of service potential: one application out of control = a full cluster transmit queue until someone can manually intervene.
  • 41. 41 WebSphere MQ IBM Software Group | WebSphere software The cluster as a pipe • Often a WebSphere MQ backbone will be used for multiple types of traffic • When using a single cluster and the same queue managers, messages all share the same channels • Even multiple cluster receiver channels in the same cluster will not separate out the different traffic types • Multiple overlaid clusters with different channels enable separation.
  • 42. 42 WebSphere MQ IBM Software Group | WebSphere software The cluster as a pipe  Putting an application in a separate cluster gives the option of also giving it its own channel.  Applications with a need for a strictly controlled WLM ratio can be given their own clusters for this reason. However, bear in mind the cost of too many overlapping clusters – RFE 23391.  In general, try to group applications with similar requirements (real time / batch / pub-sub) rather than ending up with a channel for every application.  Applications don’t need to know which cluster their resources are in, as long as configuration is managed correctly on their behalf.  New in WebSphere MQ 7.1: pub/sub can be limited to specialised clusters/queue managers using the PSCLUS attribute (see the sketch below).  New in WebSphere MQ 7.5: channels for different clusters can also be separated at the transmission queue level.
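  As a sketch of that 7.1 control – disabling cluster publish/subscribe participation on queue managers whose clusters should carry point-to-point traffic only:
    * Prevent this queue manager taking part in cluster publish/subscribe activity
    ALTER QMGR PSCLUS(DISABLED)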
  • 43. 43 WebSphere MQ IBM Software Group | WebSphere software Workload balancing level interference  Multiple applications sharing the same queue managers and the same cluster channels.  Cluster workload balancing is at the channel level: messages sharing the same channels, but destined for different target queues, are counted together.  The two channels here have an even 50/50 split of messages… but the two instances of Service 1 do not!  Split the Service 1 and Service 2 queues out into separate clusters or queue managers, or customise the workload balancing logic.
  • 44. 44 WebSphere MQ IBM Software Group | WebSphere software Multiple cluster transmit queues (V7.5 distributed, V8 z/OS)  A much requested feature… multiple cluster transmission queues.  Separation of message traffic: with a single transmission queue there is potential for pending messages for cluster channel A to interfere with messages pending for cluster channel B.  Management of messages: use of queue concepts such as MAXDEPTH is not useful when using a single transmission queue for more than one channel.  Monitoring: tracking the number of messages processed by a cluster channel is currently difficult/impossible using the queue.  Often requested for, but not necessarily about, performance: in reality a shared transmission queue is not always the bottleneck; often other solutions to improving channel throughput (e.g. multiple cluster receiver channels) are really what’s needed.
  • 45. 45 WebSphere MQ IBM Software Group | WebSphere software Multiple cluster transmit queues – automatic  Configured on the sending queue manager, not the owners of the cluster receiver channel definitions.  A queue manager switch automatically creates a dynamic transmission queue per cluster sender channel: ALTER QMGR DEFCLXQ( SCTQ | CHANNEL )  Dynamic queues are based upon the model queue SYSTEM.CLUSTER.TRANSMIT.MODEL, with well known queue names: SYSTEM.CLUSTER.TRANSMIT.<CHANNEL-NAME>
  • 46. 46 WebSphere MQ IBM Software Group | WebSphere software Multiple cluster transmit queues – manual  Still configured on the sending queue manager, not the owners of the cluster receiver channel definitions.  Administratively define a transmission queue and configure which cluster sender channels will use it: DEFINE QLOCAL(CLUSTER1.XMITQ) CLCHNAME(GREEN.*) USAGE(XMITQ)  Set a channel name pattern in CLCHNAME – single or multiple channels (wildcard), e.g. all channels for a specific cluster (assuming a suitable channel naming convention!)  Any cluster sender channel not covered by a manual transmission queue defaults to the DEFCLXQ behaviour (a combined sketch follows below).
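  A combined sketch of the two styles – the <cluster>.* channel naming convention here is an assumption that makes the CLCHNAME wildcard map one transmission queue per cluster:
    * Automatic: a dynamic transmission queue per cluster sender channel
    ALTER QMGR DEFCLXQ(CHANNEL)
    * Manual: one administratively defined transmission queue per cluster,
    * assuming channels are named <cluster-name>.<qmgr-name>
    DEFINE QLOCAL(CLUSTER1.XMITQ) USAGE(XMITQ) CLCHNAME('CLUSTER1.*')
    DEFINE QLOCAL(CLUSTER2.XMITQ) USAGE(XMITQ) CLCHNAME('CLUSTER2.*')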
  • 47. 47 WebSphere MQ IBM Software Group | WebSphere software When things go wrong Disaster Recovery 47
  • 48. 48 WebSphere MQ IBM Software Group | WebSphere software What do we need to recover?  Everything: applications must be able to connect and target the exact same MQ resources; every existing, in-flight message must be processed whilst in DR mode; new messages must be processed whilst in DR mode.  WebSphere MQ resources: applications must be able to connect and target the exact same MQ resources; new messages must be processed whilst in DR mode.  Application availability: applications must be able to connect and target equivalent MQ resources; new messages must be processed whilst in DR mode.
  • 49. 49 WebSphere MQ IBM Software Group | WebSphere software What to recover  Before discussing disaster recovery in a clustered environment, we need to think about what we mean by disaster recovery even on a single queue manager basis.  Many people will think of synchronous replication (using underlying host or disk replication, etc.) as the ‘gold standard’: the only way we can really achieve guaranteed identical configuration or data – but it carries a significant performance cost and is impractical in many scenarios (e.g. the distance between data centres).  When moving beyond synchronous replication we have to consider whether it is only ‘configuration’ (queue and channel definitions, etc.) or also data which we are trying to restore.  Restoring data implies WebSphere MQ may be being questionably used as a ‘system of record’ – possibly revisit the architecture? The application must be designed to deal with duplicate and/or lost messages.  In a cluster, the line between data and config is further blurred – cluster knowledge exists as state data on the repository queue, for example.
  • 50. 50 WebSphere MQ IBM Software Group | WebSphere software Recovering everything – synchronous datacenter replication  Little need to replicate each full repository, just keep the two apart.  Outside the datacenter, queue managers must be locatable in either site: an IP switching layer, or comma-separated channel connection names.
  • 51. 51 WebSphere MQ IBM Software Group | WebSphere software Synchronous disk replication  In either of these scenarios life is simple from a clustering perspective.  As long as all queue managers see the failed-over instance as ‘in the same place’, or understand 7.0.1-style connection names, no other consideration is required.  Some administrators will prefer to include full repositories in the DR failover unit – there is no strong requirement for this unless other factors apply: as long as there is a good availability expectation for the pair, a new full repository can be added at leisure to pick up the strain in the event of the real loss of one. Including them may also add significantly to the performance cost of replication.
  • 52. 52 WebSphere MQ IBM Software Group | WebSphere software Recovering MQ resources – asynchronous datacenter replication  Backup queue managers contain historic messages (if any).  Cluster state will also be historic.  Refreshing cluster state on failover and failback is essential (REFRESH).
  • 53. 53 WebSphere MQ IBM Software Group | WebSphere software Asynchronous replication  Remember that cluster state is persisted as ‘data’, so a REFRESH is mandatory on restore from backup, failover, or ‘fail back’ to live (a sketch follows below).  Either a cloned queue manager or a backup can be made to work; our experience is that when things go wrong, a ‘true’ backup is easier to work with – the same process then applies whether preserving some application data or not.  The most common problem is a missed refresh when reverting to ‘live’ – things may appear to work for a while…  The IP address/conname can:  Change and be fine once re-advertised (after a REFRESH) – for a manually cloned queue manager this is probably the default; blank connames also make this easy.  Change but not actually need to be re-advertised – e.g. a comma-separated list – though the REFRESH step is still needed.  Remain the same – assuming you have the capability to modify routing, or a DNS conname is used.  A new queue manager appearing with the same name as an existing (possibly failed) queue manager will be allowed in to take its place; message AMQ9468 / CSQX468I was added to warn when this occurs.
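  The refresh itself is one MQSC command per cluster on the recovered queue manager – a sketch; note that REPOS(YES) cannot be used on a full repository:
    * Discard locally cached cluster state and rebuild it from the full repositories
    REFRESH CLUSTER(CLUSTER1) REPOS(YES)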
  • 54. 54 WebSphere MQ IBM Software Group | WebSphere software Application availability – no replication, warm standby  Backup queue managers are always running.  Cluster workload balancing is used to direct traffic to the live queue managers.  Messages will be trapped on the live queue managers in the event of a failure.  Applications must be designed to accommodate this configuration.
  • 55. 55 WebSphere MQ IBM Software Group | WebSphere software Warm standby  This is the other ‘easy’ scenario, and the recommendation wherever possible.  Applications must be designed to have loose affinities to any particular queue manager: put messages to clustered queues rather than to Queue X @ QMgr Y.  CLWLPRTY – or, for manual control, CLWLRANK – allows smooth switching between primary and secondary instances as required (see the sketch below).  Data on failed queue managers will be trapped unless and until they are restarted, which implies applications must be able to replay; if this means duplication is possible, restart procedures may need to clear trapped messages out.  This is the same as the earlier ‘One cluster’ scenario.
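  A sketch of the preference mechanism – equivalent queues in both datacenters, with the live instances advertised at a higher cluster priority so the standby is only chosen when no live instance is reachable (queue and cluster names are illustrative):
    * Datacenter 1 (live): preferred while its channels are reachable
    DEFINE QLOCAL(APP.QUEUE) CLUSTER(GLOBAL) CLWLPRTY(9)
    * Datacenter 2 (warm standby): chosen only when no higher-priority instance is available
    DEFINE QLOCAL(APP.QUEUE) CLUSTER(GLOBAL) CLWLPRTY(4)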
  • 56. 56 WebSphere MQ IBM Software Group | WebSphere software How much do we need to recover? Cost increases with the depth of recovery:  Application availability – live, cross-center WebSphere MQ clustering (active/active) with duplicated queue manager configuration.  WebSphere MQ resources – restore from a stale backup (plus REFRESH), restore a nearly live backup, or asynchronous replication.  Everything – live disk replication, synchronous replication.
  • 57. 57 WebSphere MQ IBM Software Group | WebSphere software Questions? Thank you
  • 58. © 2014 IBM Corporation IBM Software Group WebSphere Software Thank You
  • 59. 59 WebSphere MQ IBM Software Group | WebSphere software Legal Disclaimer • © IBM Corporation 2014. All Rights Reserved. • The information contained in this publication is provided for informational purposes only. While efforts were made to verify the completeness and accuracy of the information contained in this publication, it is provided AS IS without warranty of any kind, express or implied. In addition, this information is based on IBM’s current product plans and strategy, which are subject to change by IBM without notice. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, this publication or any other materials. Nothing contained in this publication is intended to, nor shall have the effect of, creating any warranties or representations from IBM or its suppliers or licensors, or altering the terms and conditions of the applicable license agreement governing the use of IBM software. • References in this presentation to IBM products, programs, or services do not imply that they will be available in all countries in which IBM operates. Product release dates and/or capabilities referenced in this presentation may change at any time at IBM’s sole discretion based on market opportunities or other factors, and are not intended to be a commitment to future product or feature availability in any way. Nothing contained in these materials is intended to, nor shall have the effect of, stating or implying that any activities undertaken by you will result in any specific sales, revenue growth or other results. • All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.