Building an Active-Active IBM MQ System - matthew1001
Shows how message availability and service availability can be configured to reduce downtime and improve overall availability of your MQ network. Demonstrates how Uniform Clusters can be used to help keep your service availability high.
IBM Think 2018: IBM MQ High Availability - Jamie Squibb
An overview of IBM MQ's high availability capabilities, plus a deeper dive into the new Replicated Data Queue Manager (RDQM) feature that is available in IBM MQ V9.0.4 on Linux.
Enterprise messaging and IBM MQ are a critical part of any system, and this session shows you how MQ is rapidly evolving to meet your needs. Irrespective of your platform or environment, this session introduces many of the updates to MQ in 2019 and 2020, whether that's in administration, building fault-tolerant, scalable messaging solutions, or securing your systems.
IBM MQ and Kafka, what is the difference? - David Ware
Message queueing solutions used to be the one general-purpose tool used for all asynchronous application patterns; then along came event streaming as an application model. Supporting this effectively needed a whole new approach to how messages are handled by the messaging technology. Now the tables are turned, and many are wondering if an event streaming solution can be used for all their asynchronous application patterns from now on. But just as message queueing solutions work in a way that optimizes for their core use cases, so do event streaming solutions, and these behaviors directly affect the applications that use them. This session picks IBM MQ and Kafka to look at how they compare and, more importantly, differ in their behavior so that you can decide which application scenarios are best suited by each. Spoiler: they're both good in their own way!
High availability of a messaging system is essential. This is especially true for IBM MQ systems, which are absolutely critical to the smooth running of many enterprises. IBM MQ Advanced made achieving high availability even easier with Replicated Data Queue Managers. Learn how this and other HA capabilities fit into a system that provides both high availability of the messaging system as a whole and of every last piece of critical messaging data that you care about.
These charts provide a high-level overview of IIB HA topologies:
• Comparison of active/active and active/passive HA
• Solutions for active/passive HA failover with IBM Integration Bus
• Solutions for active/active processing with IBM Integration Bus
• Adding Global Cache to active/active processing
• Combining all of the above
Only HTTP and JMS (MQ) workloads are shown
Intro video here - https://youtu.be/MWsoXPFHY5Q
Can you afford an outage? What happens if one occurs? IBM MQ brings you the capabilities to build active-active solutions for continuous availability and to scale out a system horizontally. This presentation shows you how to use MQ to its fullest, stepping away from single queue managers and utilising MQ clusters and the new Uniform Cluster pattern which automatically keeps your applications balanced, no matter what happens.
IBM MQ What's new - including 9.3 and 9.3.1 - Robert Parker
I presented at the IBM MQ French User Group in Paris on the topic of What's new in MQ. I covered both what was new in IBM MQ 9.3 LTS and what was new in the latest IBM MQ 9.3.1 CD release.
This presentation is an overview of IBM App Connect, a new solution for business users to connect the apps they use everyday to automate their workflow and free up more time to get back to the work that matters to them. Learn more about App Connect here: http://ibm.co/1pNVwgV
IBM MQ: An Introduction to Using and Developing with MQ Publish/Subscribe - David Ware
IBM MQ allows application programmers to use the publish/subscribe application model with ease. This session takes you through the fundamental publish/subscribe concepts and how they relate to IBM MQ. Covering aspects of system design, configuration and application programming, this session is essential for all users looking to adopt publish/subscribe with IBM MQ.
WebSphere MQ includes a choice of APIs and supports the Java™ Message Service (JMS) API. WebSphere MQ is the market-leading messaging integration middleware product. Originally introduced in 1993 (under the IBM MQSeries® name), WebSphere MQ provides an available, reliable, scalable, secure, and high-performance transport mechanism to address businesses' connectivity requirements.
IBM DataPower Gateway appliances are used in a variety of user scenarios to enable security, control, integration and optimized access for a range of workloads including Mobile, Web, API, B2B, Web Services and SOA. This presentation from the IBM DataPower team provides an in-depth look at each use case.
414: Build an agile CI/CD Pipeline for application integration - Trevor Dolby
This presentation was originally presented at IBM TechCon 2021. Many CI/CD practices are well known - but how do they apply when 'Integration' itself is the primary deliverable? Pipelines and testing are ubiquitous in the modern software world, and integration often brings its own challenges in this area. Come and join us as we showcase where the challenges are and how IBM App Connect meets them with unit test capability for shift-left testing and early-stage pipeline use, efficient application packaging & container image construction, and flexible runtime configuration.
Here is a presentation about WebSphere Application Server.
Take a closer look at the area via these links: Application Infrastructure (http://www-03.ibm.com/software/products/sv/category/SW600) and Connectivity & Integration (http://www-03.ibm.com/software/products/sv/category/SW666).
CICS Transaction Gateway V9.1 Overview - Robert Jones
CICS TG V9.1 enables simple and rapid mobile integration of your enterprise CICS Transaction Server (CICS TS) family or TXSeries™ environment. You can build on your existing, proven architecture to quickly provide mobile connectivity to back-end systems by using JavaScript™ Object Notation (JSON) web services.
A complete overview of the IBM CICS Transaction Gateway V9.1 products:
CICS Transaction Gateway for z/OS V9.1
CICS Transaction Gateway for Multiplatforms V9.1
CICS Transaction Gateway Desktop Edition V9.1
Product datasheet: https://ibm.biz/cicstg91datasheet
Crossing the river by feeling the stones from legacy to cloud native applica... - OPNFV
Doug Smith (Red Hat, Inc.) and Gergely Csatari (Nokia)
There is an anecdote about a tourist lost in the middle of the countryside in Ireland, who pulls over and asks a local, "How can I get to Galway from here?" To which the local, after thinking for some time, responds, "If I was going to Galway, I wouldn't start from here at all."
Cloud native application development can feel like that sometimes, especially in the telecom industry. I have an application, it's running fine on a bare metal server, and now I am expected to make it resilient, scale-out, cloud native, microservice architecture, buzzword compliant. But how do you get there from where you are?
This presentation will present the hero's quest, identifying the key constraint to cloud resiliency at each stage, and identifying measures for addressing them. By showing the evolution story from the perspective of two applications, including a real telecom application, this presentation addresses the practical problems. The approach is not "rewrite your app from scratch", it is refactoring for incremental improvements.
Doug and Gergely will address the automation of application deployment and configuration, separation of state from behaviour, clustering, handling storage for cloud native applications, monitoring and event management, and container orchestration, so that, at each step along the journey, you know what problem you are solving, and how to get to the next step from where you are.
This presentation is in addition to a series of workshops held at the summit sponsored by the Cloud Native Computing Foundation and organized by Dave Neary, and includes a short summary of the topics presented in those workshops in addition to the perspectives on how to complete the quest to cloud native applications.
Could the “C” in HPC stand for Cloud? This paper examines aspects of computing important in HPC (compute and network bandwidth, compute and network latency, memory size and bandwidth, I/O, and so on) and how they are affected by various virtualization technologies. For more information on IBM Systems, visit http://ibm.co/RKEeMO.
Visit the official Scribd Channel of IBM India Smarter Computing at http://bit.ly/VwO86R to get access to more documents.
Effective administration of IBM Integration Bus - Sanjay Nagchowdhury - Karen Broughton-Mabbitt
The latest fix pack releases of IBM Integration Bus (IIB) include many features that make administering the product easier. Discover the right ways to effectively administer and operate the product, and learn tips and tricks that should be in every IBM Integration Bus administrator's toolbox. We will also demonstrate the ability to consolidate information from multiple IIB installations using the ELK stack and LogMet on IBM Bluemix.
The essential elements of an Enterprise PaaS, such as faster delivery, intelligent capacity on demand, efficiency and security, high performance and how Apache Stratos (incubating) is delivering these aspects.
Comparison of Current Service Mesh Architectures - Mirantis
Learn the differences between Envoy, Istio, Conduit, Linkerd and other service meshes and their components. Watch the recording including demo at: https://info.mirantis.com/service-mesh-webinar
Presenting the newest version of Cloudify - 4.6 - including an orchestrated SD-WAN demo from MEF18 where Cloudify is used as the orchestration platform for uCPE based on containers.
The rise of microservices details how the software infrastructure of the future is changing. As corporations strive for competitive advantage, they must redesign their brownfield legacy applications and move them to the cloud. Agile cloud applications follow microservices and cloud-native development patterns. Microservices architectures are enabled by Docker and Kubernetes; both projects are hosted by the CNCF.
Microservices architectures are being enhanced with a service mesh layer, which simplifies the communication and management of cloud-native applications.
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M... - Peter Broadhurst
An introduction to one possible MQ architecture - an active/active multiple queue manager client<->server environment.
Summary of detailed topology articles available here:
http://ow.ly/vrUUV
And MQDev blog+discussion on client attachment here:
http://ibm.co/MM8rMl
AnyMind Group Tech Talk - Microservices architecture with AWS - Nhân Nguyễn
How to define a well-organized cloud architecture is a crucial ingredient for the success of any start-up. This presentation will bring you lessons learnt at AnyMind Group on the cloud and how to architect microservices with Elastic Container Service and Docker on AWS.
Microservices Architecture with AWS @ AnyMind Group - Giang Tran
How to define a well-organized cloud architecture is a crucial ingredient for the success of any start-up. This presentation will bring you lessons learnt at AnyMind Group on the cloud and how to architect microservices with Elastic Container Service and Docker on AWS.
[WSO2Con Asia 2018] Microservices, Containers, and Beyond - WSO2
This slide deck discusses what’s next in highly agile, massively distributed environments. It focuses on fine-tuned DevOps processes, governance, and observability in a massively distributed container-native microservices platform.
Learn more: https://wso2.com/library/conference/2018/08/wso2con-asia-2018-microservices-containers-and-beyond/
Similar to IBM Cloud Integration Platform High Availability - Integration Tech Conference (20)
IBM Interconnect 2016
To address a diverse set of needs coming from many quadrants (IoT, Shadow IT, SaaS adoption, etc.), IBM recognizes that the integration market must take a revolutionary step to get ahead of the needs of our customers. Enter the "Hybrid Integration Platform," IBM's vision to evolve into the next generation of highly-productive integration offerings. In this session, we describe how IBM's Hybrid Integration Platform draws together the capabilities of its constituent parts (IBM App Connect, Cast Iron, IBM Integration Bus, API Management and Bluemix) into a cohesive set of integration capabilities to enable digital transformation for the enterprise. This is a technical session focusing on architecture and technical details.
IBM Interconnect 2016. This session outlines the offerings and initiatives that IBM provides around cloud and "as-a-service" messaging. We explain their roles and how they work together to deliver agility to business, while retaining the mission-critical reliability that enterprises have come to expect of IBM messaging. Topics include the work we are doing in IBM MQ Enterprise messaging to facilitate its deployment in public and private IaaS clouds, the use of MQ in Docker and how we are making it easier to build self-service deployments on-premise, the new MQ Light API and how it can be exploited from IBM Bluemix and "fast-speed of IT" systems of engagement, the MQ Light Service for IBM Bluemix and the work we are doing with the Apache Kafka project.
Project Zero PHP talk at JavaOne 2008.
This talk describes IBM WebSphere sMash and the PHP support within it. For more information visit http://www.projectzero.org
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also ran a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
IBM Cloud Integration Platform High Availability - Integration Tech Conference
1. Integration Technical Conference 2019
E03: High Availability for Cloud Integration Platform
Rob Nicholson
rob_nicholson@uk.ibm.com
2. Integration Technical Conference 2019
Cloud Integration Platform High Availability
Understanding high availability for the Cloud Integration Platform involves first understanding the overall architecture and the high-availability considerations for the container platform. Based on this, we then explore HA for each integration service.
Agenda:
• Overview of Cloud Integration Platform Architecture.
• Kubernetes and IBM Cloud Private High Availability Concerns.
• Deployment considerations
• MQ High Availability
• Event Streams High Availability
• API Connect High Availability
• App Connect High Availability
• Aspera High Availability
3. Integration Technical Conference 2019
Cloud Integration Platform Architecture
[Architecture diagram: the Cloud Integration Platform Management UI unifies ICP services UI components (IAM, Logging, Monitoring, Metering) and Integration Services UI components (MQ, Event Streams, App Connect, DataPower, API Connect, Aspera), all running on Kubernetes master/proxy, management and worker nodes]
• Cloud Integration Platform (CIP) deploys and manages instances of Integration Services running on a Kubernetes infrastructure.
• Instances of Integration Services are deployed individually as needed to satisfy each use case.
• Services can be deployed in highly available topologies across multiple Kubernetes worker nodes, or non-HA on a single worker.
• Deployment and management is via the UI or CLI, allowing integration with CI/CD pipelines.
• CIP leverages the IBM Cloud Private (ICP) services, which run on dedicated master/proxy and management nodes in HA or non-HA configurations.
• The Management UI unifies the management UIs of the Integration Services and the ICP services.
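The HA versus single-worker deployment choice above is usually captured declaratively when driven from the CLI in a CI/CD pipeline. The manifest below is only a generic Kubernetes sketch, not taken from the deck: the names and labels are hypothetical, and a real Cloud Integration Platform service would be deployed from its product charts rather than hand-written. It shows the standard pod anti-affinity construct that keeps replicas on separate worker nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: integration-service        # hypothetical name
spec:
  replicas: 3                      # one replica per worker node for HA
  selector:
    matchLabels:
      app: integration-service
  template:
    metadata:
      labels:
        app: integration-service
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: integration-service
            topologyKey: kubernetes.io/hostname   # spread across nodes
```

With `requiredDuringScheduling...`, a replica that cannot be placed on a distinct node stays Pending, which makes an under-provisioned (non-HA) cluster visible immediately rather than silently co-locating replicas.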
4. Integration Technical Conference 2019
Container Platform Availability
• Cloud Integration Platform is based on ICP Foundation.
• ICP Foundation is functionally identical to ICP Cloud Native edition. The only difference is licensing.
• All HA, sizing and deployment information provided for ICP Cloud Native edition applies to Cloud Integration Platform.
• This presentation provides a summary. For more detail, consult the ICP Cloud Native edition documentation, articles and education. See the links page at the end of this presentation.
5. 5IntegrationTechnical Conference 2019
Node Types
• A kubernetes Node is a VM or bare metal machine which is part
of a Kubernetes cluster.
• Master Nodes run the services that control the cluster including
the etcd database that stores the current state of the cluster.
• Proxy Nodes transmit external requests to the services created
inside your cluster.
• Management Nodes host management services such as
monitoring, metering, and logging.
• Worker Nodes run Integration Services. If Cloud Integration
Platform is running in a cluster with other workloads then these
also run on the worker nodes.
• Master, Proxy and Management node types can be combined
together but it is recommended in production clusters to keep
them separate so that excessive workload does not impact
stability.
• The minimum theoretical configuration is therefore one
Master/Proxy/Management node and one Worker node. This
would not be Highly available and is unlikely to have enough CPU
to be usable for more than limited demos.
Note: there are other, more obscure node types, such as dedicated etcd nodes and vulnerability
advisor nodes, but these are beyond the scope of this presentation.
[Diagram: node types (Master, Proxy, Management, Worker) in a Cloud Integration Platform cluster, and the theoretical minimum cluster of one combined Master/Proxy/Management node plus one Worker.]
6. Integration Technical Conference 2019 6
Container Platform Availability
• To make the solution fully Highly Available, each component
must be deployed in HA topology.
• Master Nodes contain software that uses a quorum* paradigm
for high availability and so these must be deployed as an odd
number of nodes. Typically either 3 or 5 masters are used in a HA
cluster depending on the size of the cluster and the type of load.
• Proxy nodes do not require a quorum, so 2 or more proxy nodes
are needed for HA.
• Management nodes do not require a quorum, so 2 or more
management nodes are needed for HA.
• Worker nodes run Integration Services. Depending on the
integration services required, 2 or more worker nodes may be
needed for HA. (More detail in subsequent slides.)
• As previously noted, Master/Proxy/Management nodes can be
combined, so a minimal configuration could be 3 combined
Master/Proxy/Management nodes.
• A topology that is often used is: 3 Master/Proxies + 2
Management + 3 Workers.
[Diagrams: a full Cloud Integration Platform HA cluster (3 Masters, 2 Proxies, 2 Management nodes, 2 Workers) and a minimal HA cluster (3 combined Master/Proxy/Management nodes plus 2 Workers).]
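As a hedged aside (not part of the deck), the quorum rule behind the odd master counts can be checked with a few lines of arithmetic: a quorum-based cluster of n members needs a strict majority alive, so it tolerates (n - 1) // 2 failures, which is why 3 masters beat 4.

```python
# Sketch: failures a quorum-based component (e.g. etcd on the master nodes)
# can tolerate while still holding a strict majority.

def tolerated_failures(members: int) -> int:
    """Members that can fail while a strict majority remains."""
    majority = members // 2 + 1
    return members - majority

# 3 masters tolerate 1 failure; 4 masters still tolerate only 1,
# so even member counts add cost without adding resilience.
for n in (1, 3, 4, 5):
    print(f"{n} members -> tolerates {tolerated_failures(n)} failure(s)")
```

This is why HA clusters typically run 3 or 5 masters rather than an even number.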
8. Integration Technical Conference 2019 8
Deployment Considerations
• How many independent clusters do you need?
• Enterprises deploy multiple clusters for any/all of the following reasons:
• Geographic availability – Ability to survive a regional outage
• To separate organizations
• To separate development, test and production.
• To guard against errors by individual operators.
• Are the Kubernetes nodes deployed into separate failure domains?
• Separate Availability zones in a public cloud.
• Separate physical servers/racks in a data centre
• What are the common points of failure?
• See https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone
10. Integration Technical Conference 2019 10
Public Cloud Deployment
• ICP 3.1.1 is supported on IBM Cloud and Amazon.
• The current CIP is based on ICP 3.1.1.
• ICP 3.1.2 adds support for Azure.
• On public clouds it is recommended to use the cloud
provider's storage solution:
• IBM Block Storage
• Amazon EBS
• Distribute the cluster across 3 availability zones with
less than 30 ms latency between them.
• Multiple AZs in a single region, not multiple regions.
11. Integration Technical Conference 2019 11
High Availability for Integration services.
Integration Services run on the worker nodes.
The Cloud Integration SolutionPak is composed from the Cloud Paks built for the component
products.
IBM Cloud Paks are developed by the product development teams to embody best practice for deploying
the product onto Kubernetes as a secure, scalable, highly available deployment.
Thus, at a high level, all that has to be ensured to provide a highly available deployment is:
• There are sufficient nodes for the solution.*
• The nodes are deployed into separate failure domains so that a single failure will not take out
multiple nodes.**
Kubernetes takes care of:
• Running appropriate numbers of instances of the SolutionPak, spread across the available workers.
• Mixing together the workloads on the available worker nodes, taking into account the resources they
need.
• Restarting workloads that fail, and recovering from failed nodes by scheduling workloads onto
alternative nodes.
* Some of the component products require 2 worker nodes for active/standby-style HA and others require 3 or more workers for
quorum-style deployment.
** In larger clusters the nodes should be spread between 3 or more availability zones.
[Diagram: a Cloud Integration Platform HA cluster with 3 Masters, 2 Proxies, 2 Management nodes and 3 Workers.]
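As an illustrative sketch (the zone names are invented, and this loosely mimics rather than reproduces the Kubernetes scheduler), the failure-domain requirement amounts to a spread-placement rule: replicas are dealt round-robin across zones, so losing any one zone never removes more than its share.

```python
# Sketch: round-robin spread of replicas across failure domains, so that a
# single zone outage cannot take out all replicas of a workload.
from itertools import cycle

def spread(replicas: int, zones: list) -> dict:
    """Count of replicas placed in each zone, dealt round-robin."""
    placement = {z: 0 for z in zones}
    for _, zone in zip(range(replicas), cycle(zones)):
        placement[zone] += 1
    return placement

placement = spread(3, ["zone-a", "zone-b", "zone-c"])
# Losing zone-a still leaves two of the three replicas running.
survivors = sum(n for z, n in placement.items() if z != "zone-a")
```

With 3 replicas over 3 zones, each zone holds exactly one replica, so any single-zone failure leaves a majority running.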
12. Integration Technical Conference 2019 12
Myth Busting
• MYTH 1: I need separate worker nodes for each
Integration Service.
• MYTH 2: I cannot run multiple SolutionPaks on the
same nodes as ‘cloud native’ workloads.
• REALITY: Kubernetes will schedule pods across the
available nodes based on its scheduling policies and
the declared resource requirements. This will almost
certainly mix workloads together on nodes unless you
explicitly prevent it.
• It is possible to use taints, tolerations and node affinity
to control which nodes a workload is scheduled to.
(Advanced topic, not covered here.)
13. Integration Technical Conference 2019 13
Summary - High Availability for Integration.
Product        | Approach(es) to HA                                                  | Minimum worker nodes in the cluster
MQ             | Data availability – failover; service availability – active/active | 2
Event Streams  | Quorum                                                              | 3
APIC           | Quorum                                                              | 3
App Connect    | Stateless – active/active; stateful – failover                      | 2
Aspera         | Quorum                                                              | 3
DataPower      | Quorum                                                              | 3
At a high level, all you need to do is deploy the cluster in an HA configuration with at least 3 workers.
Deploy the MQ services from the Helm charts and they will be highly available by default.
14. Integration Technical Conference 2019 14
Cloud Integration Platform High Availability
Understanding high availability for the Cloud Integration Platform involves first understanding the overall architecture and the
high-availability considerations for the container platform. Based on this, we then explore HA for each Integration service.
Agenda:
• Overview of Cloud Integration Platform Architecture.
• Kubernetes and IBM Cloud Private High Availability Concerns.
• Deployment considerations
• MQ High Availability
• Event Streams High Availability
• API Connect High Availability
• App Connect High Availability
• Aspera High Availability
15. Integration Technical Conference 2019 15
IBM MQ High Availability Concepts
Availability of the ‘MQ service’.
• The MQ service is available when applications can connect to an MQ queue manager and send and
receive messages.
• The MQ service can be made highly available by running multiple queue managers in
active/active configurations.
Availability of individual messages.
• An individual message that has been sent to an MQ queue is available when a client can receive
that particular message.
• The data can be made highly available by minimizing the time during which the queue manager
holding the message is unavailable.
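A toy model (the queue manager and message names are invented for illustration) of the distinction above: service availability survives the loss of one queue manager in an active/active pair, but a message already stored on the failed queue manager stays unavailable until it comes back.

```python
# Toy model of MQ service availability vs. individual message availability.

class QueueManager:
    def __init__(self, name):
        self.name = name
        self.up = True
        self.messages = []

qm1, qm2 = QueueManager("QM1"), QueueManager("QM2")  # active/active pair
qm1.messages.append("order-42")        # a message lands on QM1

qm1.up = False                         # QM1 fails

service_available = qm1.up or qm2.up   # True: apps can still connect and PUT
message_available = qm1.up             # False: "order-42" waits for QM1
```

This is why the deck treats the two availabilities separately: active/active keeps the service up, while message availability depends on how quickly the failed queue manager is restored.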
16. IBM MQ High Availability Options on Distributed Servers
[Diagram comparing the three options:
• Single resilient queue manager – one MQ instance on HA storage; platform load balancing; platform HA.
• Multi-instance queue manager – an active instance and a standby sharing HA storage; client load balancing; IBM MQ product HA.
• Replicated data queue manager – an active instance plus two standbys, each with local storage; client or server load balancing; IBM MQ product HA.]
17. IBM MQ High Availability in Cloud Pak / CIP
[Diagram: the same three options in the context of the Cloud Pak / CIP. The single resilient queue manager (platform load balancing, platform HA) is the current approach; the use of multi-instance queue managers within Kubernetes is being investigated; replicated data queue manager technology is not supported in Kubernetes.]
18. Container Orchestration
Single Resilient Queue Manager on
Kubernetes
• Container restarted by Kubernetes*
• Data persisted to external storage
• The restarted container connects to the existing
storage
• An IP Gateway routes traffic to the active
instance
• Implemented as a Kubernetes StatefulSet
of 1
• Implications: both the service and the
messages stored in the single queue
manager are unavailable during failover
[Diagram: an application connects through the IP Gateway (from the container platform) to a single MQ container backed by HA network storage.]
* Total Kubernetes node failure considerations are described in
https://developer.ibm.com/messaging/2018/05/02/availability-scalability-ibm-mq-containers/
20. Horizontal Scaling
[Diagram: multiple applications issue MQ PUTs and MQ GETs through connection routing to a messaging layer of several MQ containers. Scale the MQ instances based on workload requirements.]
22. Recommended Routing
[Diagram: each application holds a CCDT and connects through the platform IP Gateway. MQ PUTs are routed across the queue manager instances, while MQ GETs connect to the specific queue manager that holds the messages.]
24. Integration Technical Conference 2019 24
Event Streams High
Availability
• Event Streams deploys Apache Kafka in an HA topology by
default.
• The chart deploys brokers as a StatefulSet with 3 members by
default.
• Kubernetes will schedule the pods to different nodes.
• Kafka’s architecture is inherently HA. The client connection
protocol handles discovery and failover.
• The default for topic creation is 3 replicas with min in-sync
replicas = 2.
• For multi-AZ deployments the number of brokers must currently
match the number of AZs.
[Diagram: clients with intelligent discovery and reconnection logic connect to the Kafka brokers.]
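The replica settings above have a simple consequence worth spelling out (a sketch of the arithmetic, not Event Streams code): with 3 replicas and min in-sync replicas = 2, a producer using acks=all keeps working with one broker down but blocks if a second fails.

```python
# Sketch: whether acks=all writes are accepted for a topic, given the number
# of in-sync replicas and a min.insync.replicas setting of 2.

def writes_accepted(in_sync_replicas: int, min_isr: int = 2) -> bool:
    """Producers with acks=all succeed only while the ISR meets the minimum."""
    return in_sync_replicas >= min_isr

print(writes_accepted(3))  # all three brokers healthy
print(writes_accepted(2))  # one broker lost: still writable
print(writes_accepted(1))  # two brokers lost: producers block
```

So the default topology tolerates one broker (or one AZ, in a 3-AZ spread) failing without losing write availability.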
26. Integration Technical Conference 2019 26
API Connect High Availability
[Diagram: an ICP cluster with three worker nodes hosting the Gateway, Analytics, Portal and Manager components.]
• API Connect requires 3 worker nodes for HA.
• The chart deploys as HA by default:
global setting: mode = standard (dev mode is non-HA)
Cassandra cluster size = 3
Gateway replica count = 3
• Kubernetes distributes the resulting resources
across 3 (or more) nodes in the cluster for high
availability.
The API Connect whitepaper discusses multi-cluster,
multi-data-centre and DR topics:
http://ibm.biz/APIC2018paper
28. Integration Technical Conference 2019 28
App Connect High Availability
• The ACE control plane is HA (a ReplicaSet of 3) by
default.
• Integration servers deployed without local MQ
queue managers (QMs) are stateless.
Deployed HA (a ReplicaSet of 3) by default.
The BAR file is retrieved from the control plane at startup.
• Integration servers deployed with a local QM are
deployed like MQ:
a single resilient queue manager (StatefulSet of 1).
[Diagram: an ICP cluster with three worker nodes running the ACE control plane (three CP replicas), three stateless ACE servers, and a stateful ACE + MQ server.]
30. Integration Technical Conference 2019 30
Aspera HSTS
• Aspera HSTS has an architecture that
can be deployed highly available.
• Aspera HSTS in the CIP GA code cannot
be deployed as highly available
directly from the Helm chart.
• The deployment can be modified to
make it highly available.
31. Integration Technical Conference 2019 31
Aspera HSTS – Routing HA
Aspera HSTS serves two types of traffic
• HTTPS API traffic
• This traffic is routed via ICP’s
Ingress Controller, which is highly
available.
32. Integration Technical Conference 2019 32
Aspera HSTS – Routing HA
• FASP transfer traffic
• This traffic is routed by custom
proxies built by Aspera.
• These can be scaled horizontally
as a ReplicaSet and are
redundantly available.
• The proxies co-ordinate with the
Load Balancer to find available
transfer pods.
• The Load Balancer is stateless and
can be deployed as a ReplicaSet.
34. Integration Technical Conference 2019 34
Aspera HSTS – Application HA
• Aspera transfer pods are each
dedicated to a single transfer session.
• The Swarm Service manages transfer
pods, auto-scaling as needed and
ensuring free pods are available at all
times.
• If the Swarm Service dies, no existing
traffic is impacted, but no new
transfer pods are scheduled until
it is restarted.
• The Aspera Swarm Service is currently
deployed as a single instance.
• The Swarm Service will add leader
election as part of L2 certification.
35. Integration Technical Conference 2019 35
Aspera HSTS – Control Plane
• The Stats Service consumes transfer
events from Kafka and publishes pod
availability for consumption by the
Aspera Load Balancer.
• The Stats Service is currently a single
instance, but will be updated to use
leader election as part of L2
certification.
• If the Stats Service fails, load balancing
will be based on stale information
until the Stats Service restarts.
• Kafka is provided by IBM Event
Streams, which is an HA service.
As a rule of thumb, the memory requirement is 4x the vCPU requirement.
For HA, master nodes require a quorum, so there should be an odd number of them, while management nodes and proxy nodes do not require a quorum.
The workload (application and middleware) sizing determines the total capacity requirement, and the number of worker nodes is derived from that.
When thinking about the options for IBM MQ high availability on a cloud platform, initially you may consider the following:
Single resilient queue manager: this is where you have a single instance of a queue manager, and the cloud environment monitors it and replaces the VM or container as necessary. From a container point of view this requires persistent storage, which is likely to be HA storage to overcome a host machine failure. Container failover times are typically fast, and comparable to a multi-instance queue manager, while those for other technologies vary. Client connections will be routed to the active instance by an external load-balancing mechanism, which removes the need for the client to provide this load balancing. Examples of these technologies include Docker, Kubernetes, VMware HA / SRM and AWS Auto Scaling.
Multi-instance queue manager: this is the “traditional”, out-of-the-box high availability provided by IBM MQ. Multi-instance queue managers are instances of the same queue manager configured on different servers. One instance of the queue manager is defined as the active instance and another is defined as the standby instance. If the active instance fails, the multi-instance queue manager starts automatically on the standby server. Shared storage is used for the queue manager data, so either instance is able to access the data. Normally this shared storage will be HA storage to overcome failure situations. A file lock on the storage determines which of the two possible instances should be active. This HA option does not provide any IP or hostname load balancing, and therefore it is the responsibility of the MQ client to understand the two possible locations where the MQ instance could be running.
Replicated data queue manager: this is a relatively recent addition to the MQ Advanced product (introduced in 9.0.4) and is based on technology used in the MQ Appliance. In the simplest of situations it will include three standard Red Hat Linux servers, one being active and handling requests, with the other two replicating the data and waiting to become active, in a similar logical manner to the standby instance within a multi-instance queue manager. When comparing to the other options, the following aspects stand out:
No shared disk: each individual server includes a complete replica of the queue manager data, removing the need for a shared disk, which simplifies the configuration and potentially improves the performance of the overall solution.
Floating IP: with multi-instance queue managers, two servers with separate IP addresses could be hosting and running the active instance of the queue manager, which means that the client needs to be explicitly aware of the two IP addresses. With the replicated data queue manager a floating IP address is used, and associated with the server that is running the active queue manager. This means that clients only need to be aware of the floating IP address, simplifying the logic on the client.
In a container environment, a degree of high availability is provided automatically by Docker and container orchestration. A container can be automatically restarted in the case of a failure, or if the container is detected to be unhealthy. In this case the container is terminated and destroyed, and a new container is created and started, which will logically take over the function of the previous container. The container orchestration platform provides an IP Gateway to route requests to the location of the active container. Any data that the container needs to persist is stored on shared storage, so any running container can attach (the platform ensures that only one container is running at any one time).
It is important to consider the availability to PUT a message separately from the availability to GET (retrieve) a message. Many use cases want to assure that an application is able to offload work (PUT a message), and therefore need a higher level of availability for this than for the ability to GET (retrieve) the message. We call this higher level of availability for offloading work continuous availability. Here we create at least two instances of MQ, each of which has queues that can receive an application's messages. Routing of inbound connections to store messages is then provided across the available MQ instances; we discuss the options for this routing later. For applications retrieving messages, connections are made directly to the MQ instances instead of using the routing capability, because an application must connect to the MQ instance hosting the individual queue to retrieve its messages.
As you may have noticed on the previous slide, we have also horizontally scaled the queue managers; we can continue this scaling with 3, 4, … instances of queue managers to support the level of demand required, using the same pattern.
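The continuous-availability pattern described above can be sketched in a few lines (queue manager and message names are invented for illustration): PUTs are routed to any available instance, while a GET must reach the instance that holds the message.

```python
# Sketch: PUTs routed across available queue managers; GETs pinned to the
# queue manager hosting the queue.
import random

class QM:
    def __init__(self, name):
        self.name, self.up, self.queue = name, True, []

def put(qms, msg):
    candidates = [q for q in qms if q.up]          # connection routing
    if not candidates:
        raise RuntimeError("MQ service unavailable")
    random.choice(candidates).queue.append(msg)

def get(qm):
    # Retrieval must target the specific instance that stores the message.
    if not qm.up:
        raise RuntimeError(f"{qm.name} is down; its messages are unavailable")
    return qm.queue.pop(0) if qm.queue else None

qms = [QM("QM1"), QM("QM2")]
qms[0].up = False          # one instance down...
put(qms, "payment-1")      # ...but the PUT still succeeds via QM2
```

The asymmetry is the whole point: PUT availability scales with the number of instances, while GET availability for a given message is tied to one instance.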
In the previous charts we included the term “Connection Routing”; this provides the ability for the application to have its MQ traffic directed to one of multiple possible network destinations. There are three typical options:
Static routing – the simplest way of removing the restriction of a connection to a single address is to use a comma-separated list of host names and their ports. This list can be set in connection factories or in the MQSERVER environment variable. This approach can be used simply to find a single queue manager that might be running at one of multiple possible network locations, or to distribute traffic across multiple queue managers that offer the same queue. There are several considerations with this technique:
The further down the list the client must go, the longer the connection takes, so the list should be ordered with the most likely available locations first.
The first available queue manager in the list receives all the connections, which does not make it a good match for workload balancing. It is possible to provide lists in different orders to different applications to get some level of balancing of connections, but there are better mechanisms.
The configuration is brittle, in that changes need to be made to individual application configurations if and when servers are moved.
Client Channel Definition Table – the CCDT is a file that determines the connection information used by a client to connect to a queue manager. A CCDT file can contain multiple entries for a single logical connection, allowing it to distribute traffic across a number of queue managers. The CCDT can be configured so that channels in a group are either tried sequentially (for availability) or tried randomly based on specified weightings (for workload balancing). This provides greater flexibility than defining the connection information statically. The CCDT file can also be retrieved from a central location using HTTP or FTP, meaning updates can be made once and picked up by all applications. This central management is a key advance over the static routing mechanism.
Load balancer – the load balancer could be provided by the container orchestration engine, or be a standard network load balancer such as an F5. Either way, it can provide TCP load balancing in front of the IBM MQ queue managers. This approach removes connection selection from the application, which instead connects to the load balancer's IP address and port. Load balancers often offer richer workload-management strategies, which may make them more appealing than a CCDT. However, certain capabilities, such as JMS, are not recommended with an external load balancer, so these need to be considered when selecting an effective strategy.
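The first two routing options can be sketched as follows (host names and weightings are illustrative only, and this models the selection behaviour rather than any real MQ client code): the static list always lands on the first reachable entry, while a CCDT-style weighted random choice spreads connections across the group.

```python
# Sketch: static host-list routing vs. CCDT-style weighted random selection.
import random

def static_route(hosts, is_up):
    """Try hosts in order; all traffic lands on the first available one."""
    for host in hosts:
        if is_up(host):
            return host
    raise ConnectionError("no queue manager reachable")

def ccdt_route(channels, rng=random):
    """Pick a channel at random, biased by CCDT-style weightings."""
    names, weights = zip(*channels)
    return rng.choices(names, weights=weights, k=1)[0]

up = {"qm1(1414)": False, "qm2(1414)": True}
print(static_route(["qm1(1414)", "qm2(1414)"], lambda h: up[h]))
print(ccdt_route([("qm1(1414)", 3), ("qm2(1414)", 1)]))
```

Note how the static list exhibits the drawback described above: every client that finds the first host up sends all its traffic there, whereas the weighted choice spreads load in proportion to the configured weightings.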