IBM MQ - High Availability and Disaster Recovery

Presentation given at InterConnect 2016 on HA and DR techniques for use with IBM MQ.
1. HHM-3416: IBM MQ High Availability and Disaster Recovery
   Mark Taylor
   marke_taylor@uk.ibm.com
   IBM Hursley
2. Please Note
   IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
   Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
3. Abstract
   • IBM MQ provides many capabilities that will keep your data safe and your business running in the event of failures, whether you're running on your own systems in your data centres, on-premise IaaS, IaaS in a public cloud, or a hybrid cloud across all of these. This session introduces you to the solutions available and how they can be effectively used together to build extremely reliable environments providing HA and DR for messaging on-premise and in the hybrid cloud.
4. Introduction
   • Availability is a very large subject
   • You can have the best technology in the world, but you have to manage it correctly
   • Technology is not a substitute for good planning and testing!
5. What is DR?
   • Getting applications running after a major (often whole-site) failure or loss
   • It is not about High Availability, although often the two are related and share design and implementation choices
   ‒ “HA is having 2, DR is having them a long way apart”
   ‒ More seriously, HA is about keeping things running, while DR is about recovering when HA has failed
   • Requirements driven by business, and often by regulators
   ‒ Data integrity, timescales, geography …
   • One major decision point: cost
   ‒ How much does DR cost you, even if it’s never used?
   ‒ How much are you prepared to lose?
6. Disaster Recovery vs High Availability
   • Designs for HA typically involve a single site for each component of the overall architecture
   • Designs for DR typically involve separate sites
   • Designs for HA (and CA) typically require no data loss
   • Designs for DR typically can have limited data loss
   • Designs for HA typically involve high-speed takeover
   • Designs for DR typically can permit several hours of down-time
7. HIGH AVAILABILITY
8. Single Points of Failure
   • With no redundancy or fault tolerance, a failure of any component can lead to a loss of availability
   • Every component is critical. The system relies on the:
   ‒ Power supply, system unit, CPU, memory
   ‒ Disk controller, disks, network adapter, network cable
   ‒ ... and so on
   • Various techniques have been developed to tolerate failures:
   ‒ UPS or dual supplies for power loss
   ‒ RAID for disk failure
   ‒ Fault-tolerant architectures for CPU/memory failure
   ‒ ... etc
   • Elimination of SPOFs is important to achieve HA
9. IBM MQ HA technologies
   • Queue manager clusters
   • Queue-sharing groups
   • Support for networked storage
   • Multi-instance queue managers
   • MQ Appliance
   • HA clusters
   • Client reconnection
10. Queue Manager Clusters
   • Sharing cluster queues on multiple queue managers prevents a queue from being a SPOF
   • Cluster workload algorithm automatically routes traffic away from failed queue managers
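To make this concrete, here is a minimal MQSC sketch (the cluster, queue, and queue manager names are illustrative, not from the deck): defining the same clustered queue on two queue managers that are already members of the cluster lets the workload algorithm route around a failed host.

```
* Run on both QM1 and QM2, members of cluster INVCLUS (names are examples)
DEFINE QLOCAL(INV.REQUEST) CLUSTER(INVCLUS) DEFBIND(NOTFIXED)
```

DEFBIND(NOTFIXED) lets each message be workload-balanced rather than fixed to one instance when the queue is opened.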
11. Queue-Sharing Groups
   • On z/OS, queue managers can be members of a queue-sharing group
   • Shared queues are held in a coupling facility
   ‒ All queue managers in the QSG can access the messages
   • Benefits:
   ‒ Messages remain available even if a queue manager fails
   ‒ Pull workload balancing
   ‒ Apps can connect to the group
   [Diagram: several queue managers, each with private queues, all accessing shared queues in the coupling facility]
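As a hedged MQSC sketch (the structure and queue names are hypothetical), a shared queue is defined once with a shared disposition and lives in a coupling facility structure, so any queue manager in the QSG can serve it:

```
* z/OS, from any queue manager in the QSG (names are examples)
DEFINE QLOCAL(SHARED.REQUEST) QSGDISP(SHARED) CFSTRUCT(APP1)
```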
12. Introduction to Failover and MQ
   • Failover is the automatic switching of availability of a service
   ‒ For MQ, the “service” is a queue manager
   • Traditionally the preserve of an HA cluster, such as HACMP
   • Requires:
   ‒ Data accessible on all servers
   ‒ Equivalent or at least compatible servers
   ▪ Common software levels and environment
   ‒ Sufficient capacity to handle workload after failure
   ▪ Workload may be rebalanced after failover, requiring spare capacity
   ‒ Startup processing of the queue manager following the failure
   • MQ offers several ways of configuring for failover:
   ‒ Multi-instance queue managers
   ‒ HA clusters
   ‒ MQ Appliance
13. Failover considerations
   • Failover times are made up of three parts:
   ‒ Time taken to notice the failure
   ▪ Heartbeat missed
   ▪ Bad result from status query
   ‒ Time taken to establish the environment before activating the service
   ▪ Switching IP addresses and disks, and so on
   ‒ Time taken to activate the service
   ▪ This is queue manager restart
   • Failover involves a queue manager restart
   ‒ Nonpersistent messages, nondurable subscriptions discarded
   • For fastest times, ensure that queue manager restart is fast
   ‒ No long-running transactions, for example
14. MULTI-INSTANCE QUEUE MANAGERS
15. Multi-instance Queue Managers
   • Basic failover support without an HA cluster
   • Two instances of a queue manager on different machines
   ‒ One is the “active” instance, the other is the “standby” instance
   ‒ Active instance “owns” the queue manager’s files
   ▪ Accepts connections from applications
   ‒ Standby instance monitors the active instance
   ▪ Applications cannot connect to the standby instance
   ▪ If the active instance fails, the standby restarts the queue manager and becomes active
   • Instances are the SAME queue manager – only one set of data files
   ‒ Queue manager data is held in networked storage
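A minimal command-line sketch of setting this up (the name QM1 and the /shared paths are illustrative; the storage must be suitable networked storage such as NFSv4):

```
# Machine A: create the queue manager with data and logs on networked storage
crtmqm -md /shared/qmdata -ld /shared/qmlog QM1

# Machine A: print the addmqinf command that declares this queue manager
# on another machine
dspmqinf -o command QM1

# Machine B: paste and run the addmqinf command printed above

# Both machines: start with -x, permitting a standby instance;
# the second strmqm -x becomes the standby
strmqm -x QM1
```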
16. Multi-instance Queue Managers – 1. Normal execution
   [Diagram] The active instance of QM1 on Machine A (168.0.0.1) owns the queue manager data on networked storage. The standby instance on Machine B (168.0.0.2) can fail over. MQ clients connect across the network.
17. Multi-instance Queue Managers – 2. Disaster strikes
   [Diagram] Machine A fails: the active instance’s locks on the networked storage are freed, its IP address is lost, and client connections are broken.
18. Multi-instance Queue Managers – 3. Failover
   [Diagram] The standby on Machine B (168.0.0.2) becomes the active instance and now owns the queue manager data. Client connections are still broken.
19. Multi-instance Queue Managers – 4. Recovery complete
   [Diagram] The active instance runs on Machine B and owns the queue manager data; client connections reconnect to 168.0.0.2.
20. Dealing with multiple IP addresses
   • The IP address of the queue manager changes when it moves
   ‒ So MQ channel configuration needs a way to select an address
   • Connection name syntax extended to a comma-separated list (sketched below)
   ‒ CONNAME('168.0.0.1,168.0.0.2')
   ‒ Needs a 7.0.1 queue manager or client
   • Unless you use external IPAT, an intelligent router, or MR01
   • WAS 8 admin panels understand this syntax
   • For earlier levels of WAS:
   ‒ Connection Factories:
   ▪ Set a custom property called XMSC_WMQ_CONNECTION_NAME_LIST to the list of host/port names that you wish to connect to
   ▪ Make sure that the existing host and port values defined on the connection factory match the first entry in this property
   ‒ Activation Specs:
   ▪ Set a custom property called connectionNameList on the activation spec with the same format
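As an MQSC sketch (the channel name and ports are illustrative), the comma-separated connection list goes straight into the channel definition, so a client tries each address in turn:

```
* Client-connection channel able to reach either instance of QM1
DEFINE CHANNEL(QM1.SVRCONN) CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
       CONNAME('168.0.0.1(1414),168.0.0.2(1414)') QMNAME(QM1)
```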
21. HA CLUSTERS
22. HA clusters
   • MQ traditionally made highly available using an HA cluster
   ‒ IBM PowerHA for AIX (formerly HACMP), Veritas Cluster Server, Microsoft Cluster Server, HP Serviceguard, …
   • HA clusters can:
   ‒ Coordinate multiple resources, such as application server and database
   ‒ Consist of more than two machines
   ‒ Fail over more than once without operator intervention
   ‒ Take over the IP address as part of failover
   ‒ Be more resilient in cases of MQ and OS defects
23. HA clusters
   • In HA clusters, queue manager data and logs are placed on a shared disk
   ‒ Disk is switched between machines during failover
   • The queue manager has its own “service” IP address
   ‒ IP address is switched between machines during failover
   ‒ Queue manager’s IP address remains the same after failover
   • The queue manager is defined to the HA cluster as a resource dependent on the shared disk and the IP address
   ‒ During failover, the HA cluster will switch the disk, take over the IP address and then start the queue manager
24. MQ in an HA cluster – Active/active
   [Diagram] Normal execution: QM1 active on Machine A (168.0.0.1) and QM2 active on Machine B (168.0.0.2); each queue manager’s data and logs reside on the shared disk.
25. MQ in an HA cluster – Active/active
   [Diagram] Failover: after Machine A fails, the shared disk is switched, the IP address (168.0.0.1) is taken over, and QM1 is restarted on Machine B alongside QM2.
26. Multi-instance QM or HA cluster?
   • Multi-instance queue manager
   ‒ Integrated into the MQ product
   ‒ Faster failover than an HA cluster
   ▪ Delay before queue manager restart is much shorter
   ‒ Runtime performance of networked storage
   ‒ System administrator responsible for restarting a standby instance after failover
   • HA cluster
   ‒ Capable of handling a wider range of failures
   ‒ Failover historically rather slow, but some HA clusters are improving
   ‒ Capable of more flexible configurations (e.g. N+1)
   ‒ Extra product purchase and skills required
   • Storage distinction
   ‒ Multi-instance queue manager typically uses NAS
   ‒ HA clustered queue manager typically uses SAN
27. High Availability for the MQ Appliance
   [Diagram: primary and secondary appliances]
   • IBM MQ Appliances can be deployed in HA pairs
   ‒ Primary instance of a queue manager runs on one
   ‒ Secondary instance on the other, for HA protection
   • Primary and secondary work together
   ‒ Operations on the primary are automatically replicated to the secondary
   ‒ Appliances monitor one another and perform local restart/failover
   • Easier configuration than other HA solutions (no shared file system or shared disks)
   • Supports manual failover, e.g. for rolling upgrades
   • Replication is synchronous over Ethernet, for 100% fidelity
   ‒ Routable, but not intended for long distances
28. Setting up HA for the Appliance
   • The following command is run on appl1:
   ‒ prepareha -s <some random text> -a <address of appl2>
   • The following command is run on appl2:
   ‒ crthagrp -s <the same random text> -a <address of appl1>
   • crtmqm -sx HAQM1
   • Note that there is no need to run strmqm
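Putting the slide’s commands in order, with comments (the placeholders are the slide’s own; substitute real values):

```
# On appl1: prepare the HA group, giving a shared secret and appl2's address
prepareha -s <some random text> -a <address of appl2>

# On appl2: create the HA group using the same secret and appl1's address
crthagrp -s <the same random text> -a <address of appl1>

# On one appliance of the pair: create the queue manager as an HA queue
# manager; it is started automatically, so no strmqm is needed
crtmqm -sx HAQM1
```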
29. Virtual Images
   • Another mechanism being regularly used
   • When MQ is in a virtual machine … simply shoot and restart the VM
   • “Turning it off and back on again”
   • Can be faster than any other kind of failover
30. MQ Virtual System Pattern for PureApplication System
   • MQ Pattern for Pure supports the multi-instance HA model
   • Exploits GPFS storage
   • Use the pattern editor to create a virtual machine containing MQ and GPFS
31. How it works
   [Screenshots: single-rack and multi-rack deployments]
32. Pattern Builder
   [Screenshot]
33. Pattern Builder – Configuration model
   [Screenshot]
34. Pattern Builder – Active QM
   [Screenshot]
35. Pattern Builder – Standby QM
   [Screenshot]
36. Comparison of Technologies

   Technology                         Access to existing messages   Access for new messages
   Shared Queues, HP NonStop Server   continuous                    continuous
   MQ Clusters                        none                          continuous
   HA Clustering, Multi-instance      automatic                     automatic
   No special support                 none                          none
37. APPLICATIONS AND AUTO-RECONNECTION
38. HA applications – MQ connectivity
   • If an application loses connection to a queue manager, what does it do?
   ‒ End abnormally
   ‒ Handle the failure and retry the connection
   ‒ Reconnect automatically thanks to the application container
   ▪ WebSphere Application Server contains logic to reconnect JMS clients
   ‒ Use MQ automatic client reconnection
39. Automatic client reconnection
   • MQ client automatically reconnects when the connection is broken
   ‒ MQI C clients and standalone JMS clients
   ‒ JMS in app servers (EJB, MDB) does not need auto-reconnect
   • Reconnection includes reopening queues, remaking subscriptions
   ‒ All MQI handles keep their original values
   • Can reconnect to the same queue manager or another, equivalent queue manager
   • MQI or JMS calls block until the connection is remade
   ‒ By default, will wait for up to 30 minutes
   ‒ Long enough for a queue manager failover (even a really slow one)
40. Automatic client reconnection
   • Can register an event handler to observe reconnection
   • Not all MQI is seamless, but the majority is repaired transparently
   ‒ Browse cursors revert to the top of the queue
   ‒ Nonpersistent messages are discarded during restart
   ‒ Nondurable subscriptions are remade and may miss some messages
   ‒ In-flight transactions are backed out
   • Tries to keep dynamic queues with the same name
   ‒ If the queue manager doesn’t restart, a reconnecting client’s TDQs are kept for a while in case it reconnects
   ‒ If the queue manager does restart, TDQs are recreated when it reconnects
41. Automatic client reconnection
   • Enabled in application code, an ini file, or the CLNTCONN definition
   ‒ MQI: MQCNO_RECONNECT, MQCNO_RECONNECT_Q_MGR (see the C sketch below)
   ‒ JMS: Connection factory properties
   • Plenty of opportunity for configuration
   ‒ Reconnection timeout
   ‒ Frequency of reconnection attempts
   • Requires:
   ‒ Threaded client
   ‒ 7.0.1 server – including z/OS
   ‒ Full-duplex client communications (SHARECNV >= 1)
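A minimal C sketch of the MQI route (the queue manager name is hypothetical and error handling is omitted; needs a threaded client at 7.0.1 or later):

```c
#include <cmqc.h>                      /* MQI definitions */

int main(void)
{
    MQCNO    cno = {MQCNO_DEFAULT};    /* connection options */
    MQCHAR48 qmName = "QM1";           /* hypothetical queue manager name */
    MQHCONN  hConn = MQHC_UNUSABLE_HCONN;
    MQLONG   compCode, reason;

    /* Request automatic reconnection to this queue manager or an
       equivalent one; MQCNO_RECONNECT_Q_MGR instead restricts
       reconnection to the same queue manager */
    cno.Options |= MQCNO_RECONNECT;

    MQCONNX(qmName, &cno, &hConn, &compCode, &reason);
    /* ... MQOPEN / MQPUT / MQGET as usual: object handles keep their
       original values across a reconnection ... */

    MQDISC(&hConn, &compCode, &reason);
    return 0;
}
```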
42. Client Configurations for Availability
   • Use wildcarded queue manager names in the CCDT
   ‒ Gets weighted distribution of connections
   ‒ Selects a “random” queue manager from an equivalent set
   • Use multiple addresses in a CONNAME
   ‒ Could potentially point at different queue managers
   ‒ More likely pointing at the same queue manager in a multi-instance setup
   • Use automatic reconnection
   • Pre-connect exit, from V7.0.1.4
   • Use IP routers to select an address from a list
   ‒ Based on workload or anything else known to the router
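As a sketch of the client side (the file locations are the common defaults; the group name GROUP1 is hypothetical), the CCDT location is given by environment variables, and connecting with a name that starts with an asterisk selects any queue manager in that CCDT group:

```
# Tell the client where its channel definition table lives
export MQCHLLIB=/var/mqm
export MQCHLTAB=AMQCLCHL.TAB

# The application then connects with queue manager name '*GROUP1'
# so MQ may choose any queue manager in the GROUP1 group
```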
43. Application Patterns for availability
   • Article describing examples of how to build a hub topology supporting:
   ‒ Continuous availability to send MQ messages, with no single point of failure
   ‒ Linear horizontal scaling of throughput, for both MQ and the attaching applications
   ‒ Exactly-once delivery, with high availability of individual persistent messages
   ‒ Three messaging styles: request/response, fire-and-forget, and pub/sub
   • http://www.ibm.com/developerworks/websphere/library/techarticles/1303_broadhurst/1303_broadhurst.html
44. Disaster Recovery
45. What makes a Queue Manager on Distributed?
   [Diagram] System-wide ini files and registry entries, plus per-queue-manager ini files and registry entries. The queue manager itself consists of recovery logs (/var/mqm/log/QMGR), queue files (/var/mqm/qmgrs/QMGR), object definitions, security settings, cluster state, and the SSL store.
46. Backups
   • At minimum, back up definitions at regular intervals
   ‒ Include ini files and security settings
   • One view is that there is no point in backing up messages
   ‒ They will be obsolete if they ever need to be restored
   ‒ On Distributed platforms, data backup is only possible when the queue manager is stopped
   • Use rcdmqimg on Distributed platforms to take images (example below)
   ‒ Channel sync information is recovered even for circular logs
   • Back up everything before upgrading code levels
   ‒ On Distributed, you cannot go back
   • Exclude queue manager data from normal system backups
   ‒ Some backup products interfere with MQ processing
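A hedged sketch of the commands involved (the queue manager name and output path are illustrative; dmpmqcfg is available from MQ V7.1):

```
# Record media images of objects so they can be recreated after damage
rcdmqimg -m QM1 -t all "*"

# Dump all object definitions as an MQSC script for backup
dmpmqcfg -m QM1 -a > /backups/QM1.mqsc
```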
47. What makes a Queue Manager on z/OS?
   [Diagram] The BSDS (a VSAM data set) records the highest RBA, the checkpoint list, and the log inventory. Active logs (VSAM data sets) and archived logs (tape or disk) hold all recoverable activity, with recovery RBAs and checkpoints. Pagesets (VSAM linear data sets) hold private objects and private messages. The coupling facility holds shared messages and shared objects; DB2 holds group object definitions.
48. What makes up a Queue Manager?
   • Queue manager started task procedure
   ‒ Specifies the MQ libraries to use, the location of the BSDS and pagesets, and the INP1/INP2 members used during start-up processing
   • System Parameter Module – ZPARM
   ‒ Configuration settings for logging, trace and connection environments for MQ
   • BSDS: vital for queue manager start-up
   ‒ Contains info about log RBAs, checkpoint information and log dataset names
   • Active and archive logs: vital for queue manager start-up
   ‒ Contain records of all recoverable activity performed by the queue manager
   • Pagesets
   ‒ Updates made “lazily” and brought up to date from the logs during restart
   ‒ Starting up with an old pageset (a restored backup) is not really any different from starting up after a queue manager failure
   ‒ Backup needs to copy page 0 of the pageset first (don’t do volume backup!)
   • DB2 configuration information and group object definitions
   • Coupling facility structures
   ‒ Hold QSG control information and MQ messages
49. Backing Up a z/OS Queue Manager
   • Keep copies of ZPARM, the MSTR procedure, product datasets and INP1/INP2 members
   • Use dual BSDS, dual active logs and dual archive logs
   • Take backups of your pagesets
   ‒ This can be done while the queue manager is running (fuzzy backups)
   ‒ Make sure you back up page 0 first; use REPRO or an ADRDSSU logical copy
   • DB2 data should be backed up as part of the DB2 backup procedures
   • CF application structures should be backed up on a regular basis
   ‒ These backups are made in the logs of the queue manager where the backup command was issued
50. Remote Recovery
51. Topologies
   • Sometimes a data centre is kept PURELY as the DR site
   • Sometimes 2 data centres are in daily use and back each other up for disasters
   ‒ Normal workload distributed across the 2 sites
   ‒ These sites are probably geographically distant
   • Another variation has 2 data centres “near” each other
   ‒ Often synchronous replication
   ‒ With a 3rd site providing a long-distance backup
   • And of course further variations and combinations of these
52. Queue Manager Connections
   • DR topologies make little difference for individual queue managers
   • But they do affect the overall design
   ‒ Where do applications connect to?
   ‒ How are messages routed?
   • Clients need CLNTCONN definitions that reach any machine
   • Will be affected by how you manage the network
   ‒ Do DNS names move with the site?
   ‒ Do IP addresses move with the site?
   • Some sites always put IP addresses in CONNAME; others use hostnames
   ‒ No rule on which is better
53. Disk replication
   • Disk replication can be used for MQ disaster recovery
   • Either synchronous or asynchronous disk replication is OK
   ‒ Synchronous:
   ▪ No data loss if a disaster occurs
   ▪ Performance is impacted by the replication delay
   ▪ Limited by distance (e.g. 100 km)
   ‒ Asynchronous:
   ▪ Some limited data loss if a disaster occurs
   ▪ It is critical that queue manager data and logs are replicated in the same consistency group if replicating both
   • Disk replication cannot be used between the active and standby instances of a multi-instance queue manager
   ‒ It could be used to replicate to a DR site in addition, though
54. DR for the MQ Appliance
   [Diagram: primary and secondary appliances]
   • IBM MQ Appliances can now be deployed with a DR option
   • Similar to the HA design
   ‒ But using asynchronous replication for longer-distance DR
   • Status fields reported for a DR queue manager:
   ‒ Role: Primary or Secondary
   ‒ DR Status: Normal; Synchronization in progress; Partitioned; Remote appliance(s) unavailable; Inactive; Inconsistent; Remote appliance(s) not configured
   ‒ DR synchronization progress: XX% complete
   ‒ DR estimated synchronization time: estimated absolute time at which the synchronization will complete
   ‒ Out of sync data: amount of data in KB written to this instance since the partition
55. Combining HA and DR
   [Diagram] Primary site: an HA pair with the active instance of QM1 on Machine A and the standby instance on Machine B, using shared storage. The queue manager is replicated to a backup instance on Machine C at the backup site.
56. Combining HA and DR – “Active/Active”
   [Diagram] Site 1 runs an HA pair for QM1 (active on Machine A, standby on Machine B), replicating to a backup instance on Machine C at Site 2. Symmetrically, Site 2 runs an HA pair for QM2 (active on Machine C, standby on Machine D), replicating to a backup instance on Machine A at Site 1.
57. Integration with other products
   • May want to have consistency with other data resources
   ‒ For example, databases and app servers
   • The only way to guarantee consistency is disk replication where all logs are in the same consistency group
   ‒ Otherwise transactional state might be out of sync
58. Planning and Testing
59. Planning for Recovery
   • Write a DR plan
   ‒ Document everything – to tedious levels of detail
   ‒ Include actual commands, not just a description of the operation
   ▪ Not “Stop MQ”, but “as mqm, run /usr/local/bin/stopmq.sh US.PROD.01”
   • And test it frequently
   ‒ Recommend twice a year
   ‒ Record the time taken for each task
   • Remember that the person executing the plan in a real emergency might be under-skilled and over-pressured
   ‒ Plan for no access to phones, email, online docs …
   • Each test is likely to show something you’ve forgotten
   ‒ Update the plan to match
   ‒ You’re likely to have new applications, hardware, software …
   • May have different plans for different disaster scenarios
60. Example Exercises from MQ Development
   • Different groups have different activities that must continue
   ‒ Realistic scenarios can help show what might not be available
   • From the MQ development lab …
   • Most of the change team were told there was a virulent disease and they had to work from home
   ‒ Could they continue to support customers?
   • If the Hursley machine room was taken out by a plane missing its landing at Southampton airport
   ‒ Could we carry on developing the MQ product?
   ‒ Source code libraries, build machines, test machines …
   ‒ Could fixes be produced?
   • (A common one) Someone hit the emergency power-off button
   • Not just paper exercises
61. Networking Considerations
   • DNS – you will probably redirect hostnames to a new site
   ‒ But will you also keep the same IP addresses?
   ‒ Including NAT when routing to external partners?
   ‒ Affects CONNAME
   • Include external organisations in your testing
   ‒ 3rd parties may have firewalls that do not recognize your DR servers
   • LOCLADDR configuration
   ‒ Not normally used by MQ, but firewalls, IPT and channel exits may inspect it
   ‒ May need modification if a machine changes address
   • Clustering needs special consideration
   ‒ Easy to accidentally join the real cluster and start stealing messages
   ‒ Ideally keep the network separated, but you can also help by:
   ▪ Not giving the backup ‘live’ security certs
   ▪ Not starting the chinit address space (z/OS)
   ▪ Not allowing channel initiators to start (distributed)
   ▪ Using CHLAUTH rules (see the sketch after this list)
   • Backup will be out of sync with the cluster
   ‒ REFRESH CLUSTER resolves updates
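An illustrative MQSC sketch of the CHLAUTH idea (the cluster name MYCLUS is hypothetical): block all inbound channels on the backup queue manager until it is legitimately brought into service, then refresh its view of the cluster.

```
* On the backup queue manager: refuse every inbound channel connection
SET CHLAUTH('*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS)

* When the DR site goes live: discard stale, locally held cluster state
* (REPOS(YES) is not used on full repository queue managers)
REFRESH CLUSTER(MYCLUS) REPOS(YES)
```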
62. A Real MQ Network Story
   • Customer did an IP move during a DR test
   • Forgot to move the IP back when they returned to the prime systems
   • Didn’t have monitoring in place that picked this up until users complained about lack of response
63. Other Resources
   • Applications may need to deal with replay or loss of data
   ‒ Decide whether to clear queues down to a known state, or to keep enough information elsewhere to manage replays
   • Order of recovery may change with different product releases
   ‒ Every time you install a new version of a product, revisit your DR plan
   • What do you really need to recover?
   ‒ The DR site might be lower-powered than the primary site
   ‒ Some apps might not be critical to the business
   ‒ But some might be unrecognised prerequisites
64. If a Real Disaster Hits
   • Hopefully you never need it. But if the worst happens:
   • Follow your tested plan
   ‒ Don’t try shortcuts
   • But also, if possible:
   ‒ Get someone to take notes and keep track of the time tasks took
   ‒ Prepare to attend post-mortem meetings on the steps you took to recover
   ‒ Accept all offers of assistance
   • And afterwards:
   ‒ Update your plan for the next time
65. Summary
   • Various ways of recovering queue managers
   • Plan what you need to recover for MQ
   • Plan the relationship with other resources
   • Test your plan
66. Notices and Disclaimers
   Copyright © 2016 by International Business Machines Corporation (IBM). No part of this document may be reproduced or transmitted in any form without written permission from IBM.
   U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM.
   Information in these presentations (including information relating to products that have not yet been announced by IBM) has been reviewed for accuracy as of the date of initial publication and could include unintentional technical or typographical errors. IBM shall have no responsibility to update this information. THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IN NO EVENT SHALL IBM BE LIABLE FOR ANY DAMAGE ARISING FROM THE USE OF THIS INFORMATION, INCLUDING BUT NOT LIMITED TO, LOSS OF DATA, BUSINESS INTERRUPTION, LOSS OF PROFIT OR LOSS OF OPPORTUNITY. IBM products and services are warranted according to the terms and conditions of the agreements under which they are provided.
   Any statements regarding IBM's future direction, intent or product plans are subject to change or withdrawal without notice.
   Performance data contained herein was generally obtained in controlled, isolated environments. Customer examples are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual performance, cost, savings or other results in other operating environments may vary.
   References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business.
   Workshops, sessions and associated materials may have been prepared by independent session speakers, and do not necessarily reflect the views of IBM. All materials and discussions are provided for informational purposes only, and are neither intended to, nor shall constitute legal or other guidance or advice to any individual participant or their specific situation.
   It is the customer’s responsibility to ensure its own compliance with legal requirements and to obtain advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulatory requirements that may affect the customer’s business and any actions the customer may need to take to comply with such laws. IBM does not provide legal advice or represent or warrant that its services or products will ensure that the customer is in compliance with any law.
67. Notices and Disclaimers (continued)
   Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to interoperate with IBM’s products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
   The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights, trademarks or other intellectual property right.
   IBM, the IBM logo, ibm.com, Aspera®, Bluemix, Blueworks Live, CICS, Clearcase, Cognos®, DOORS®, Emptoris®, Enterprise Document Management System™, FASP®, FileNet®, Global Business Services®, Global Technology Services®, IBM ExperienceOne™, IBM SmartCloud®, IBM Social Business®, Information on Demand, ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON, OpenPower, PureAnalytics™, PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®, pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, Smarter Commerce®, SoDA, SPSS, Sterling Commerce®, StoredIQ, Tealeaf®, Tivoli®, Trusteer®, Unica®, urban{code}®, Watson, WebSphere®, Worklight®, X-Force®, System z® and z/OS are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml.
68. Thank You
   Your Feedback is Important!
   Access the InterConnect 2016 Conference Attendee Portal to complete your session surveys from your smartphone, laptop or conference kiosk.