Session Type : Breakout Session
Date/Time : Thu, 26-Feb, 10:30 AM-11:30 AM
Venue : Mandalay Bay
Room : Surf Ballroom E
Description:
Active-Active is the target model for the modern data center. Adopting it successfully involves not only the mainframe but also heterogeneous, distributed platforms at the periphery, which makes it complex to implement. Data synchronization is at the heart of the various active-active technologies, and messaging technology is often chosen to implement it.
This session gives an overview of active-active technologies on both z and distributed platforms, highlights how Active-Active delivers the benefits of both high availability and workload balancing, and discusses cases of customers in China implementing messaging-based active-active.
Hhm 3474 mq messaging technologies and support for high availability and acti... (Pete Siddall)
The document discusses concepts of business continuity including high availability, continuous serviceability, and continuous availability across sites. It then discusses how messaging technologies like IBM MQ can provide various levels of business continuity. Specifically, it provides examples of how MQ can enable active-active configurations across multiple sites for continuous availability through data synchronization and workload distribution. This allows no downtime even during planned or unplanned events.
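The active-active pattern summarized above can be sketched conceptually. The following toy illustration uses Python's stdlib queues to stand in for MQ queues — the site names, the round-robin routing policy, and the helper functions are assumptions for illustration, not IBM MQ API calls:

```python
import itertools
import queue

# Each "site" is represented by a stdlib queue standing in for an MQ queue manager.
sites = {"site_a": queue.Queue(), "site_b": queue.Queue()}

def replicate(update):
    """Data synchronization: every state change is published to all sites."""
    for q in sites.values():
        q.put(update)

def make_router(available):
    """Workload distribution: route requests round-robin over available sites."""
    cycle = itertools.cycle(available)
    return lambda request: (next(cycle), request)

replicate({"account": 42, "balance": 100})

router = make_router(["site_a", "site_b"])
assignments = [router(f"req-{i}")[0] for i in range(4)]
print(assignments)            # alternates between the two sites

# Planned outage of site_a: requests continue against site_b with no downtime,
# because site_b already holds the replicated state.
router = make_router(["site_b"])
print(router("req-4")[0])     # 'site_b'
print(sites["site_b"].get())  # the synchronized update is present at site_b
```

Because every update is synchronized to both sites before an outage, failing over is purely a routing decision — which is the essence of the continuous-availability claim above.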
The Bridge to Cloud (Peter Gustafsson, Confluent) London 2019 Confluent Strea... (confluent)
The document discusses building a bridge from on-premise Kafka installations to Confluent Cloud. It describes Confluent Replicator, which can replicate data between on-premise and cloud Kafka clusters while protecting business-critical data. Various deployment considerations for Replicator are discussed, including placement, connectivity options, and handling bi-directional replication. Confluent Cloud is introduced as a fully managed Kafka service that reduces operational burden and allows customers to focus on high-value code.
Digital Transformation: Highly Resilient Streaming Architecture and Strategies (HostedbyConfluent)
Failure is inevitable in any distributed system, but anticipating failures and building systems that recover from them instantaneously makes a system highly resilient. At Capital One we process billions of events every day, and we leverage cloud, microservices, streaming and machine learning technologies to solve customer problems and provide the best customer experience.
In this session I will talk about the highly resilient streaming architecture that supports processing billions of events every day, along with some of the strategies & best practices for building highly available, fault-tolerant systems using Kafka and cloud environments.
Introduction to Stream Processing with Apache Flink (2019-11-02 Bengaluru Mee... (Timo Walther)
Apache Flink is a distributed, stateful stream processor. It features exactly-once state consistency, sophisticated event-time support, high throughput and low latency processing, and APIs at different levels of abstraction (Java, Scala, SQL). In my talk, I'll give an introduction to Apache Flink, its features and discuss the use cases it solves. I'll explain why batch is just a special case of stream processing, how its community evolves Flink into a truly unified stream and batch processor and what this means for its users.
https://www.meetup.com/de-DE/Bangalore-Apache-Kafka-Group/events/265285812/
https://www.youtube.com/watch?v=Ych5bbmDIoA&list=PLvkUPePDi9sa27SG9eGNXH25cfUeo_WY9&index=2
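The event-time windowing idea central to the Flink talk above — and the claim that batch is just a special case of streaming — can be illustrated with a minimal pure-Python sketch. This is a conceptual stand-in, not Flink's actual API; the function name and event format are assumptions:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Group (timestamp, key) events into event-time tumbling windows.

    Window assignment uses the event's own timestamp, not arrival order,
    which is the essence of event-time processing.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_size) * window_size
        counts[(window_start, key)] += 1
    return dict(counts)

# Out-of-order events are still assigned to the correct window.
events = [(1, "a"), (12, "a"), (3, "b"), (11, "b"), (2, "a")]
print(tumbling_window_counts(events, 10))
# {(0, 'a'): 2, (10, 'a'): 1, (0, 'b'): 1, (10, 'b'): 1}

# "Batch is a special case of streaming": one window wide enough to cover
# the whole dataset yields the classic batch aggregate.
print(tumbling_window_counts(events, 10**9))
# {(0, 'a'): 3, (0, 'b'): 2}
```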
Iib v10 performance problem determination examples (MartinRoss_IBM)
This document discusses tools and techniques for analyzing system performance and throughput issues in IBM Integration Bus V10. It provides an overview of the Integration Bus architecture and components. It then describes various tools for monitoring resources, workload generation, and analyzing performance at the operating system, component, and message flow levels. These include tools like Process Explorer, WebUI statistics, MQ Explorer, and Java Healthcenter. The document concludes with an agenda to demonstrate analyzing two types of performance problems using these tools.
How-to Automate Application Security & Keep Up with Modern CI/CD (Ben Kohl)
The adoption of continuous integration and delivery has fundamentally altered how software is built and maintained. It has dramatically increased the pace of software release cycles and driven innovation throughout the software industry. However, security hasn’t been able to keep up and, as a result, has largely been left behind.
Traditional security approaches all rely on manual configuration and tuning. Yet, manual processes can neither provide comprehensive nor precise security. No matter how diligent, you cannot create security policies for vulnerabilities you are not aware of. Furthermore, as release cycles shrink to weekly (or even daily) it’s simply not possible to update traditional security policies fast enough to avoid overwhelming staff with false positives. Hence, we believe the key to securing modern applications is automation.
This webinar will cover:
Elements of Modern CI/CD
Traditional Security Approaches
How Modern CI/CD Undermines Traditional Security
Approaches to Automating Security
ShiftLeft Demo
The document provides an overview of NetBackup and Veritas 360 data management solutions. It discusses challenges with current data management including complex cloud migrations, fragmented protection and rising storage costs. Veritas' approach provides unified data protection across cloud, physical and virtual environments. Key solutions highlighted include NetBackup for data protection, Information Map for data visibility, and Resiliency Platform for business continuity.
How Zillow Unlocked Kafka to 50 Teams in 8 months | Shahar Cizer Kobrinsky, Z... (HostedbyConfluent)
1. Zillow transitioned from using multiple messaging systems and data pipelines to using Kafka as their single streaming platform to unify their data infrastructure.
2. They took a bottom-up approach to gain trust from teams by publishing service level objectives, onboarding non-critical streams quickly, and meeting developers where they were with tools like Terraform.
3. An important lesson was to treat the platform as a product by providing documentation, libraries, and blog posts to make it easy for developers to use.
IBM Integration Bus provides tools and features to help with integration development and administration. This presentation discusses tools for developers like the Integration Toolkit and API, as well as best practices for administrators around tasks like deployment, monitoring, and disaster recovery. It also covers how applications, libraries, and patterns can aid management of integration solutions.
- The document discusses NetBackup SAN Client and Fibre Transport which provides high-speed backups and restores over a SAN. SAN Client uses SCSI protocol and has a smaller footprint than SAN Media Server. It also discusses NetBackup appliances which are purpose-built backup appliances versus building your own solution. Appliances offer advantages like simplified management, security, and monitoring.
SingleStore & Kafka: Better Together to Power Modern Real-Time Data Architect... (HostedbyConfluent)
To remain competitive, organizations need to democratize access to fast analytics, not only to gain real-time insights on their business but also to power smart apps that need to react in the moment. In this session, you will learn how Kafka and SingleStore enable a modern yet simple data architecture to analyze both fast-paced incoming data and large historical datasets. In particular, you will understand why SingleStore is well suited to processing data streams coming from Kafka.
Introducing Events and Stream Processing into Nationwide Building Society (Ro... (confluent)
Facing Open Banking regulation, rapidly increasing transaction volumes and increasing customer expectations, Nationwide took the decision to take load off their back-end systems through real-time streaming of data changes into Kafka. Hear about how Nationwide started their journey with Kafka, from their initial use case of creating a real-time data cache using Change Data Capture, Kafka and Microservices to how Kafka allowed them to build a stream processing backbone used to reengineer the entire banking experience including online banking, payment processing and mortgage applications. See a working demo of the system and what happens to the system when the underlying infrastructure breaks. Technologies covered include: Change Data Capture, Kafka (Avro, partitioning and replication) and using KSQL and Kafka Streams Framework to join topics and process data.
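The real-time cache built from Change Data Capture described above follows KTable-like semantics: the latest change event per key wins. A minimal sketch of that materialization step, with invented keys and payloads and a plain dict standing in for the cache (this is not the KSQL or Kafka Streams API):

```python
def apply_change_events(cache, change_events):
    """Materialize a read cache from a CDC stream (KTable-like semantics):
    the latest value per key wins; a None payload is a delete (tombstone)."""
    for key, value in change_events:
        if value is None:
            cache.pop(key, None)   # tombstone: remove the cached row
        else:
            cache[key] = value     # insert or update overwrites prior state
    return cache

# A hypothetical CDC stream of (key, row) change events.
changes = [
    ("acct-1", {"balance": 100}),
    ("acct-2", {"balance": 50}),
    ("acct-1", {"balance": 75}),   # update overwrites the cached row
    ("acct-2", None),              # delete arrives as a tombstone
]
cache = apply_change_events({}, changes)
print(cache)  # {'acct-1': {'balance': 75}}
```

Replaying the change stream from the beginning rebuilds the cache, which is what makes the Kafka-backed cache recoverable when the underlying infrastructure breaks.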
Die pacman nomaden opnfv summit 2016 berlin (Zhipeng Huang)
The document discusses accelerating applications in OpenStack using dedicated hardware such as FPGAs and GPUs. It proposes extending OpenStack components like Nova, Glance, and Neutron to manage the lifecycle of acceleration resources similarly to managing VMs and bare metal servers. Nova would allocate and track acceleration devices. Glance would manage the firmware and configurations loaded onto devices. Neutron and Cinder would provide interfaces to acceleration appliances without needing to understand the underlying hardware implementation. This would allow accelerated functions to be integrated and consumed like other PaaS/SaaS services, completing the puzzle to deploy and manage network functions virtualization using acceleration in OpenStack.
IBM Integration Bus is a software product that provides integration capabilities for connecting applications, services, systems and devices. It uses a graphical interface to create reusable message flows that can transform and route messages between different platforms and data formats. The product provides extensive connectivity options, scalability, reliability and tools for development, testing and administration. A new IBM Integration Bus on Cloud service is also available, which provides a fully managed integration platform hosted in the cloud.
A service-oriented architecture looks great as boxes and lines on a whiteboard, but what is it like in real life? Are the benefits of flexibility worth the overhead of administration? We've built a framework on top of Finagle that enables a simple approach to building and deploying a microservice with SBT and Scala.
How Much Can You Connect? | Bhavesh Raheja, Disney + Hotstar (HostedbyConfluent)
How many connectors can you run in a single cluster? Disney + Hotstar runs over 10 different Connect clusters with more than 2,000 connectors. In this talk, we share our experience of running Kafka Connect at scale. We will walk through our decision to use one cluster vs. many, and how improvements in the Connect ecosystem, such as incremental rebalancing, have allowed us to scale to thousands of connectors. We will also discuss the challenges of scaling Connect workers up and down while keeping the ecosystem stable, and present a wishlist of missing features in this distributed task framework.
Having the ability to analyze why a particular process in OTM did not output the desired results dramatically increases the value of your OTM team and their overall productivity. Understanding the detailed content provided within Explanations, Logs, and Diagnostics will allow your users to become super users of their own domains.
2013 OTM EU SIG evolv applications Data Management (MavenWire)
This document discusses the history of Oracle Transportation Management (OTM) implementation processes in Europe and outlines best practices for data management and user access management. It describes how early OTM implementations relied on individual efforts which led to inconsistencies. As the user base grew, common tools and processes were developed but still varied between projects. The document advocates defining standardized practices to improve consistency, supportability and efficiency across implementations. It provides recommendations for best practices in loading reference data, managing data changes over time, and provisioning user access roles and privileges in a centralized manner.
Help, My Kafka is Broken! (Emma Humber & Gantigmaa Selenge, IBM) Kafka Summit... (HostedbyConfluent)
While Apache Kafka is designed to be fault-tolerant, there will be times when your Kafka environment just isn’t working as expected.
Whether it’s a newly configured application not processing messages, or an outage in a high-load, mission-critical production environment, it’s crucial to get up and running as quickly and safely as possible.
IBM has hosted production Kafka environments for several years and has in-depth knowledge of how to diagnose and resolve problems rapidly and accurately to ensure minimal impact to end users.
This session will discuss our experiences of how to most effectively collect and understand Kafka diagnostics. We’ll talk through using these diagnostics to work out what’s gone wrong, and how to recover from a system outage. Using this new-found knowledge, you will be equipped to handle any problem your cluster throws at you.
Bravo Six, Going Realtime. Transitioning Activision Data Pipeline to Streamin... (HostedbyConfluent)
This document summarizes Activision Data's transition from a batch data pipeline to a real-time streaming data pipeline using Apache Kafka and Kafka Streams. Some key points:
- The new pipeline ingests, processes, and stores game telemetry data at over 200k messages per second, spanning over 5 PB of data across 9 years of games.
- Kafka Streams is used to transform the raw streaming data through multiple microservices with low 10-second end-to-end latency, compared to 6-24 hours previously.
- Kafka Connect integrates the streaming data with data stores like AWS S3, Cassandra, and Elasticsearch.
- The new pipeline provides real-time and historical access to structured
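The multi-stage transformation described in the points above — raw telemetry flowing through a chain of small map/filter steps — can be sketched as a pure-Python pipeline. The stage names, event format, and topic contents are illustrative assumptions, not Activision's actual Kafka Streams topology:

```python
import json

# Each stage mimics one Kafka Streams map/filter step in the pipeline.
def parse(raw):
    """Deserialize a raw telemetry message."""
    return json.loads(raw)

def is_valid(event):
    """Drop malformed or non-gameplay events."""
    return "player" in event

def enrich(event):
    """Attach metadata before handing off to downstream stores."""
    return {**event, "source": "telemetry"}

def run_pipeline(raw_messages):
    """Chain the stages over the stream: parse -> filter -> enrich."""
    for raw in raw_messages:
        event = parse(raw)
        if is_valid(event):
            yield enrich(event)

raw = ['{"player": "p1", "score": 10}', '{"heartbeat": true}']
print(list(run_pipeline(raw)))
# [{'player': 'p1', 'score': 10, 'source': 'telemetry'}]
```

Because each stage consumes and produces a stream rather than a finished batch, end-to-end latency is bounded by per-message processing time instead of batch cadence — the shift behind the 6-24 hours to 10 seconds improvement cited above.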
Designing and Implementing Information Systems with Event Modeling, Bobby Cal... (confluent)
Designing and Implementing Information Systems with Event Modeling, Bobby Calderwood, Founder at Evident Systems
https://www.meetup.com/Saint-Louis-Kafka-meetup-group/events/273869005/
This document discusses different cloud computing layers (IaaS, PaaS, SaaS) and how IBM Integration Bus can integrate with them. It describes how tools like Chef, IBM UrbanCode Deploy, and Bluemix PaaS can be used to automate deployment and management of IIB in cloud environments. The document also discusses how IIB can connect to SaaS applications and provide APIs to expose integration services as cloud applications.
Launching the Expedia Conversations Platform: From Zero to Production in Four... (HostedbyConfluent)
When we set out to launch our Conversations Platform at Expedia Group, our goals were simple. Enable millions of travelers to have natural language conversations with an automated agent via text, Facebook, or their channel of choice. Let them book trips, make changes or cancellations, and ask questions -- “How long is my layover?” “Does my hotel have a pool?” “How much will I get charged if I want to bring my golf clubs?”. Then take all that we know about that customer across all of our brands and apply machine learning models to give customers what they are looking for immediately and automatically, whether it be a straightforward answer or a complex new itinerary. And the final goal: go from zero to production in four months.
Such a platform is no place for batch jobs, back-end processing, or offline APIs. To quickly make decisions that incorporate contextual information, the platform needs data in near real-time and it needs it from a wide range of services and systems. Meeting these needs meant architecting the Conversations Platform around a central nervous system based on Confluent Cloud and Apache Kafka. Kafka made it possible to orchestrate data from loosely coupled systems, enrich data as it flows between them so that by the time it reaches its destination it is ready to be acted upon, and surface aggregated data for analytics and reporting. Confluent Cloud made it possible for us to meet our tight launch deadline with limited resources. With event streaming as a managed service, we had no costly new hires to maintain our clusters and no worries about 24x7 reliability.
When we built the platform, we did not foresee the worldwide pandemic and the profound effect it would have on the travel industry. Companies were hit with a tidal wave of customer questions, cancellations, and rebookings. Throughout this once-in-a-lifetime event, the Conversations Platform proved up to the challenge, auto-scaling as necessary and taking much of the load off of live agents.
In this session, we’ll share how we built and deployed the Conversations Platform in just four months, the lessons we learned along the way, key points to consider for anyone architecting a platform with similar requirements, and how it handled the unprecedented demands placed upon it by the pandemic. We’ll also show a demo of the platform that includes high-level insights obtained from analytics and a visualization of the low-level events that make up a conversation.
Interconnect session 3498: Deployment Topologies for Jazz Reporting Service (Rosa Naranjo)
The document provides an overview of the Jazz Reporting Service architecture and deployment topologies. It discusses the key components of JRS including the Data Collection Component, Data Warehouse, Lifecycle Query Engine, and Report Builder. It then describes example deployment topologies such as departmental, enterprise, and federated models. The document outlines two major phases in reporting - data collection and report execution. It discusses factors that affect the performance of each phase and provides strategies for handling large data volumes and high user loads.
WebSphere MQ is IBM's middleware for messaging and queuing that allows applications on distributed systems to communicate. It has a consistent API across platforms, and the current version is 7.0. Previously known as MQSeries, it was rebranded as WebSphere MQ in 2002. Messaging involves program-to-program communication between systems using message queues. MQ defines different queue types for specific purposes that applications can use to exchange messages.
This document outlines a 16-session training course on IBM WebSphere MQ. The sessions cover MQ architecture, messaging concepts, objects like queues and channels, distributed queue management, triggering, commands, dead letter queues, utilities, installation, configuration files, clusters, multi-instance usage, and protocols. Hands-on exercises are included to demonstrate topics like one-way and two-way communication, triggering, dead letter queue processing, and how clusters work.
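The two-way (request/reply) communication pattern covered in the training outline above can be sketched conceptually. Here stdlib queues stand in for an MQ request queue and reply-to queue, and the correlation-id field loosely mirrors how MQ replies are matched to requests — the dict message format and function names are assumptions for illustration:

```python
import queue

# Stand-ins for an MQ request queue and its reply-to queue.
request_q, reply_q = queue.Queue(), queue.Queue()

def requester(payload, correl_id):
    """Put a request carrying a correlation id, so the eventual reply
    can be matched back to this request (conceptually like MQ's CorrelId)."""
    request_q.put({"correl_id": correl_id, "body": payload})

def responder():
    """Serve one request: read it, process it, reply on the reply queue."""
    msg = request_q.get()
    reply_q.put({"correl_id": msg["correl_id"], "body": msg["body"].upper()})

requester("hello", correl_id=7)
responder()
reply = reply_q.get()
print(reply)  # {'correl_id': 7, 'body': 'HELLO'}
```

The requester and responder never call each other directly — both only touch queues — which is the decoupling that queued messaging provides and that the hands-on exercises in the course demonstrate.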
The document provides an overview of NetBackup and Veritas 360 data management solutions. It discusses challenges with current data management including complex cloud migrations, fragmented protection and rising storage costs. Veritas' approach provides unified data protection across cloud, physical and virtual environments. Key solutions highlighted include NetBackup for data protection, Information Map for data visibility, and Resiliency Platform for business continuity.
How Zillow Unlocked Kafka to 50 Teams in 8 months | Shahar Cizer Kobrinsky, Z...HostedbyConfluent
1. Zillow transitioned from using multiple messaging systems and data pipelines to using Kafka as their single streaming platform to unify their data infrastructure.
2. They took a bottom-up approach to gain trust from teams by publishing service level objectives, onboarding non-critical streams quickly, and meeting developers where they were with tools like Terraform.
3. An important lesson was to treat the platform as a product by providing documentation, libraries, and blog posts to make it easy for developers to use.
IBM Integration Bus provides tools and features to help with integration development and administration. This presentation discusses tools for developers like the Integration Toolkit and API, as well as best practices for administrators around tasks like deployment, monitoring, and disaster recovery. It also covers how applications, libraries, and patterns can aid management of integration solutions.
- The document discusses NetBackup SAN Client and Fibre Transport which provides high-speed backups and restores over a SAN. SAN Client uses SCSI protocol and has a smaller footprint than SAN Media Server. It also discusses NetBackup appliances which are purpose-built backup appliances versus building your own solution. Appliances offer advantages like simplified management, security, and monitoring.
SingleStore & Kafka: Better Together to Power Modern Real-Time Data Architect...HostedbyConfluent
To remain competitive, organizations need to democratize access to fast analytics, not only to gain real-time insights on their business but also to power smart apps that need to react in the moment. In this session, you will learn how Kafka and SingleStore enable modern, yet simple data architecture to analyze both fast paced incoming data as well as large historical datasets. In particular, you will understand why SingleStore is well suited process data streams coming from Kafka.
Introducing Events and Stream Processing into Nationwide Building Society (Ro...confluent
Facing Open Banking regulation, rapidly increasing transaction volumes and increasing customer expectations, Nationwide took the decision to take load off their back-end systems through real-time streaming of data changes into Kafka. Hear about how Nationwide started their journey with Kafka, from their initial use case of creating a real-time data cache using Change Data Capture, Kafka and Microservices to how Kafka allowed them to build a stream processing backbone used to reengineer the entire banking experience including online banking, payment processing and mortgage applications. See a working demo of the system and what happens to the system when the underlying infrastructure breaks. Technologies covered include: Change Data Capture, Kafka (Avro, partitioning and replication) and using KSQL and Kafka Streams Framework to join topics and process data.
Die pacman nomaden opnfv summit 2016 berlinZhipeng Huang
The document discusses accelerating applications in OpenStack using dedicated hardware such as FPGAs and GPUs. It proposes extending OpenStack components like Nova, Glance, and Neutron to manage the lifecycle of acceleration resources similarly to managing VMs and bare metal servers. Nova would allocate and track acceleration devices. Glance would manage the firmware and configurations loaded onto devices. Neutron and Cinder would provide interfaces to acceleration appliances without needing to understand the underlying hardware implementation. This would allow accelerated functions to be integrated and consumed like other PaaS/SaaS services, completing the puzzle to deploy and manage network functions virtualization using acceleration in OpenStack.
IBM Integration Bus is a software product that provides integration capabilities for connecting applications, services, systems and devices. It uses a graphical interface to create reusable message flows that can transform and route messages between different platforms and data formats. The product provides extensive connectivity options, scalability, reliability and tools for development, testing and administration. A new IBM Integration Bus on Cloud service is also available, which provides a fully managed integration platform hosted in the cloud.
A service-oriented architecture looks great as boxes and lines on a whiteboard, but what is it like in real life? Are the benefits of flexibility worth the overhead of administration? We've built a framework on top of Finagle that enables a simple approach to building and deploying a microservice with SBT and Scala.
How Much Can You Connect? | Bhavesh Raheja, Disney + HotstarHostedbyConfluent
How many connects can you run in a single cluster? Disney + Hotstar runs over 10 different connect clusters with over 2000+ connectors. In this talk, we share our experience of running Kafka connect at scale. We will walk through our decisions of using one cluster vs many and how the improvements in the connect ecosystem like incremental rebalancing have allowed us to scale to thousands of connects. We will also discuss challenges with scaling up & down connect workers while keeping the ecosystem stable & present a wishlist of the missing features in this distributed task framework.
Having the ability to analyze why a particular process in OTM did not output the desired results dramatically increases the value of your OTM team and their overall productivity. Understanding the detailed content provided within Explanations, Logs, and Diagnostics will allow your users to become super users of their own domains.
2013 OTM EU SIG evolv applications Data ManagementMavenWire
This document discusses the history of Oracle Transportation Management (OTM) implementation processes in Europe and outlines best practices for data management and user access management. It describes how early OTM implementations relied on individual efforts which led to inconsistencies. As the user base grew, common tools and processes were developed but still varied between projects. The document advocates defining standardized practices to improve consistency, supportability and efficiency across implementations. It provides recommendations for best practices in loading reference data, managing data changes over time, and provisioning user access roles and privileges in a centralized manner.
Help, My Kafka is Broken! (Emma Humber & Gantigmaa Selenge, IBM) Kafka Summit...HostedbyConfluent
While Apache Kafka is designed to be fault-tolerant, there will be times when your Kafka environment just isn’t working as expected.
Whether it’s a newly configured application not processing messages, or an outage in a high-load, mission-critical production environment, it’s crucial to get up and running as quickly and safely as possible.
IBM has hosted production Kafka environments for several years and has in-depth knowledge of how to diagnose and resolve problems rapidly and accurately to ensure minimal impact to end users.
This session will discuss our experiences of how to most effectively collect and understand Kafka diagnostics. We’ll talk through using these diagnostics to work out what’s gone wrong, and how to recover from a system outage. Using this new-found knowledge, you will be equipped to handle any problem your cluster throws at you.
Bravo Six, Going Realtime. Transitioning Activision Data Pipeline to Streamin...HostedbyConfluent
This document summarizes Activision Data's transition from a batch data pipeline to a real-time streaming data pipeline using Apache Kafka and Kafka Streams. Some key points:
- The new pipeline ingests, processes, and stores game telemetry data from over 200k messages per second and over 5PB of data across 9 years of games.
- Kafka Streams is used to transform the raw streaming data through multiple microservices with low 10-second end-to-end latency, compared to 6-24 hours previously.
- Kafka Connect integrates the streaming data with data stores like AWS S3, Cassandra, and Elasticsearch.
- The new pipeline provides real-time and historical access to structured
Designing and Implementing Information Systems with Event Modeling, Bobby Cal...confluent
Designing and Implementing Information Systems with Event Modeling, Bobby Calderwood, Founder at Evident Systems
https://www.meetup.com/Saint-Louis-Kafka-meetup-group/events/273869005/
This document discusses different cloud computing layers (IaaS, PaaS, SaaS) and how IBM Integration Bus can integrate with them. It describes how tools like Chef, IBM UrbanCode Deploy, and Bluemix PaaS can be used to automate deployment and management of IIB in cloud environments. The document also discusses how IIB can connect to SaaS applications and provide APIs to expose integration services as cloud applications.
Launching the Expedia Conversations Platform: From Zero to Production in Four...HostedbyConfluent
When we set out to launch our Conversations Platform at Expedia Group our goals were simple. Enable millions of travelers to have natural language conversations with an automated agent via text, Facebook, or their channel of choice. Let them book trips, make changes or cancellations, and ask questions -- "How long is my layover?" "Does my hotel have a pool?" "How much will I get charged if I want to bring my golf clubs?" Then take all that we know about that customer across all of our brands and apply machine learning models to give customers what they are looking for immediately and automatically, whether it be a straightforward answer or a complex new itinerary. And the final goal: go from zero to production in four months.
Such a platform is no place for batch jobs, back-end processing, or offline APIs. To quickly make decisions that incorporate contextual information, the platform needs data in near real-time and it needs it from a wide range of services and systems. Meeting these needs meant architecting the Conversations Platform around a central nervous system based on Confluent Cloud and Apache Kafka. Kafka made it possible to orchestrate data from loosely coupled systems, enrich data as it flows between them so that by the time it reaches its destination it is ready to be acted upon, and surface aggregated data for analytics and reporting. Confluent Cloud made it possible for us to meet our tight launch deadline with limited resources. With event streaming as a managed service, we had no costly new hires to maintain our clusters and no worries about 24x7 reliability.
When we built the platform, we did not foresee the worldwide pandemic and the profound effect it would have on the travel industry. Companies were hit with a tidal wave of customer questions, cancellations, and rebookings. Throughout this once-in-a-lifetime event, the Conversations Platform proved up to the challenge, auto-scaling as necessary and taking much of the load off of live agents.
In this session, we’ll share how we built and deployed the Conversations Platform in just four months, the lessons we learned along the way, key points to consider for anyone architecting a platform with similar requirements, and how it handled the unprecedented demands placed upon it by the pandemic. We’ll also show a demo of the platform that includes high-level insights obtained from analytics and a visualization of the low-level events that make up a conversation.
Interconnect session 3498: Deployment Topologies for Jazz Reporting ServiceRosa Naranjo
The document provides an overview of the Jazz Reporting Service architecture and deployment topologies. It discusses the key components of JRS including the Data Collection Component, Data Warehouse, Lifecycle Query Engine, and Report Builder. It then describes example deployment topologies such as departmental, enterprise, and federated models. The document outlines two major phases in reporting - data collection and report execution. It discusses factors that affect the performance of each phase and provides strategies for handling large data volumes and high user loads.
Websphere MQ is IBM's middleware for messaging and queuing that allows applications on distributed systems to communicate. It has a consistent API across platforms and current version is 7.0. Previously known as MQSeries, it was rebranded to Websphere MQ in 2002. Messaging involves program-to-program communication between systems using message queues. MQ defines different queue types for specific purposes that applications can use to exchange messages.
This document outlines a 16-session training course on IBM WebSphere MQ. The sessions cover MQ architecture, messaging concepts, objects like queues and channels, distributed queue management, triggering, commands, dead letter queues, utilities, installation, configuration files, clusters, multi-instance usage, and protocols. Hands-on exercises are included to demonstrate topics like one-way and two-way communication, triggering, dead letter queue processing, and how clusters work.
IBM MQ V9 provides a new optional delivery model with two streams: a long-term support stream for stability and a rapid function delivery stream. It includes features like central provisioning of client configuration, a new quality of service for Advanced Message Security called Confidentiality, and LDAP authorization support for Windows clients. Activity trace information can now be subscribed to via publish/subscribe without additional configuration.
IBM MQ - High Availability and Disaster RecoveryMarkTaylorIBM
IBM MQ provides capabilities to keep data safe and businesses running in the event of failures. This includes solutions for high availability (HA) and disaster recovery (DR) whether running on-premises or in hybrid cloud environments. HA aims to keep systems running through failures while DR focuses on recovering after an HA failure. Key HA technologies in IBM MQ include queue manager clusters, queue sharing groups, multi-instance queue managers, and HA clusters. These solutions provide redundancy to prevent single points of failure and enable fast failover. DR requires replicating data to separate sites which IBM MQ supports through various backup and replication features.
IBM MQ: Managing Workloads, Scaling and Availability with MQ ClustersDavid Ware
MQ Clustering can be used to solve many problems, from simplified administration and workload management in an MQ network, to horizontal scalability and continuous availability of messaging applications. This session will show the full range of uses of MQ Clusters to solve real problems, highlighting the underlying technology being used. A basic understanding of IBM MQ clustering would be beneficial.
Building highly available architectures with WAS and MQMatthew White
Abstract:
'This talk will look at architectures in which IBM MQ can be configured with the IBM WebSphere Application Server (and Liberty profiles) to give a highly-available scenario.
The basis will be some of the scenarios that are documented in the developerWorks series "A flexible and scalable WebSphere MQ topology pattern".'
Aims:
Outline some of the technologies and features that can be used for High Availability
Consider some of the implications of technology choices
Provide references for further study
Find out what scenarios and concerns are of most interest
i.e. what we should be developing next!
WebSphere MQ is a middleware tool that facilitates reliable application-to-application communication by sending and receiving messages via messaging queues. It provides a secure transport layer that moves data unchanged in the form of messages between applications across platforms. WebSphere MQ uses APIs to support programming languages like Java, C, and COBOL. It differentiates between persistent and non-persistent messages to ensure reliable delivery. The queue manager maintains objects like queues, channels, and listeners to ensure message flow.
IBM MQ - better application performanceMarkTaylorIBM
Presented in Feb 2015 at Interconnect
This presentation is aimed at helping application developers understand how to best use MQ features for higher performance.
Enable business continuity and high availability through active active techno...Qian Li Jin
IBM provides an overview of an active-active solution implemented by China Everbright Bank for their credit card system. The solution uses WebSphere MQ for real-time data synchronization between active sites in Beijing and Shanghai. This allows workload and data to be distributed across both sites for continuous availability in case of an outage. Key components discussed include the messaging architecture, application design considerations for performance, and procedures for planned and unplanned site switches. The implementation provides business continuity for Everbright Bank's credit card processing.
IBM MQ High Availabillity and Disaster Recovery (2017 version)MarkTaylorIBM
This document discusses high availability and disaster recovery strategies for IBM MQ. It describes technologies like queue manager clusters, multi-instance queue managers, and HA clusters that can be used to provide high availability when failures occur across datacenters and clouds. Multi-instance queue managers provide basic failover of a queue manager between two systems without an HA cluster. HA clusters coordinate failover of resources like the queue manager, shared storage, and IP address across multiple machines for increased reliability. The IBM MQ Appliance also supports high availability between two appliances.
AME-1936 : Enterprise Messaging for Next-Generation Core Bankingwangbo626
- The document discusses enterprise messaging solutions for next generation core banking systems. It addresses four key challenges: maximizing return on investment, enabling new business adoption, achieving extreme performance and scalability, and meeting other special requirements.
- For each challenge, the document outlines requirements and proposes approaches. Solutions discussed include using IBM MQ for universal connectivity, mobile push solutions based on MQTT/Messagesight, MQ deployment on cloud, and MQ performance tuning for active-active configurations. The document emphasizes balancing technical and business requirements.
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...Peter Broadhurst
This document provides an overview of designing a scalable and highly available IBM MQ infrastructure. Key points include:
- Using a client/server architecture with MQ deployed separately from applications provides flexibility and allows MQ to be treated as critical infrastructure similar to a database.
- Each sender should connect to two queue managers and each receiver should have two listeners concurrently attached to provide redundancy and no single point of failure.
- Other topics covered include synchronous request/response, publish/subscribe messaging, limitations for ordered messages, and integrating with IBM Integration Bus.
The document emphasizes an active/active design philosophy with minimum two queue managers and discusses workload management strategies for sending and receiving messages across multiple queue managers.
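The active/active sending pattern described above — every sender attached to two queue managers, failing over to the peer when one is down — can be sketched in a few lines. This is a minimal illustrative simulation of the design principle, not the IBM MQ client API; the class and queue manager names are hypothetical.

```python
import random

class QueueManager:
    """Toy stand-in for an MQ queue manager (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.available = True
        self.messages = []

    def put(self, msg):
        if not self.available:
            raise ConnectionError(f"{self.name} is down")
        self.messages.append(msg)

class ActiveActiveSender:
    """Workload-balances puts across two queue managers and retries on
    the peer if the first choice is down, so the sending side has no
    single point of failure."""
    def __init__(self, qm_a, qm_b):
        self.qms = [qm_a, qm_b]

    def send(self, msg):
        first = random.randrange(2)            # spread load across both
        for qm in (self.qms[first], self.qms[1 - first]):
            try:
                qm.put(msg)
                return qm.name
            except ConnectionError:
                continue                       # fail over to the peer
        raise RuntimeError("both queue managers unavailable")

qm1, qm2 = QueueManager("QM1"), QueueManager("QM2")
sender = ActiveActiveSender(qm1, qm2)
for i in range(100):
    sender.send(f"msg-{i}")
qm1.available = False                          # simulate an outage
sender.send("after-outage")                    # transparently lands on QM2
print(len(qm1.messages) + len(qm2.messages))   # 101: nothing was lost
```

Real MQ clients get a similar effect through connection name lists and client auto-reconnect, but the point of the sketch is the same: two live destinations, so an outage of either costs no messages and no downtime.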
Strategies For Migrating From SQL to NoSQL — The Apache Kafka WayScyllaDB
This document discusses strategies for migrating from SQL to NoSQL databases using Apache Kafka. It outlines the challenges of modernizing legacy databases, how Confluent can help with the migration process, and proposes a three-phase plan. The plan involves initially migrating data sources using connectors, then optimizing the data with stream processing in ksqlDB, and finally modernizing by sending the data to cloud databases. The document provides an overview of Confluent's technologies and services that can help accelerate and simplify the database migration.
Picking the Right Clustering for MySQL - Cloud-only Services or Flexible Tung...Continuent
As businesses head into the cloud, it is tempting to use the first product that offers to make database operation relatively simple by punching a few buttons on a menu. However, there's a big difference between firing up cloud database services, such as Amazon RDS, for testing or development and finding a real data management solution, such as Continuent Tungsten, that can handle hundreds of millions of transactions daily.
This webinar explores how your business can benefit from Continuent Tungsten, a flexible clustering solution that helps data-driven businesses handle billions of transactions daily across a wide range of environments. We'll focus on the following problems in particular:
- Ensuring fully capable cloud DBMS operation
- Avoiding lock-in by choosing solutions that run across clouds as well as on-premises
- Spreading MySQL data over regions using flexible primary/DR and multi-master topologies
- Controlling maintenance intervals and the DBMS stack directly
- Integrating in real-time to data warehouses and on-premises DBMS like Oracle
- Ensuring immediate access to top-notch, 24x7 support when things go south.
Your data is too precious to take shortcuts. Learn how you can use Continuent Tungsten to build scalable management solutions that offer the economic benefits of the cloud with the enterprise capabilities required by businesses that live and die by their data.
Software Architecture for Cloud InfrastructureTapio Rautonen
The document discusses software architecture principles for cloud infrastructure, including microservices, distributed computing fallacies, designing for failure, and new design patterns like cache-aside, circuit breaker, and event sourcing. It also covers topics like autoscaling, asynchronous messaging, reactive streams, configuration management, and challenges like software erosion and failures cascading in distributed systems. The overall message is that building distributed systems on cloud infrastructure requires adopting new architectural patterns to deal with failures and improve scalability, performance and resilience.
Modernizing your Application Architecture with Microservicesconfluent
Organizations are quickly adopting microservice architectures to achieve better customer service and improve user experience while limiting downtime and data loss. However, transitioning from a monolithic architecture based on stateful databases to truly stateless microservices can be challenging and requires the right set of solutions.
In this webinar, learn from field experts as they discuss how to convert the data locked in traditional databases into event streams using HVR and Apache Kafka®. They will show you how to implement these solutions through a real-world demo use case of microservice adoption.
You will learn:
-How log-based change data capture (CDC) converts database tables into event streams
-How Kafka serves as the central nervous system for microservices
-How the transition to microservices can be realized without throwing away your legacy infrastructure
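The first bullet above — log-based CDC turning database tables into event streams — can be illustrated with a tiny sketch. This is a conceptual simulation of replaying a write-ahead log into ordered change events; the record layout is invented for illustration and is not the actual HVR or Kafka Connect format.

```python
# Hypothetical write-ahead-log records, ordered by log sequence number (LSN).
wal = [
    {"lsn": 1, "op": "INSERT", "table": "orders", "row": {"id": 1, "total": 40}},
    {"lsn": 2, "op": "UPDATE", "table": "orders", "row": {"id": 1, "total": 55}},
    {"lsn": 3, "op": "DELETE", "table": "orders", "row": {"id": 1}},
]

def to_events(log):
    """Convert log records into change events keyed by primary key,
    preserving commit order via the LSN. A DELETE becomes an event with
    a null value (a 'tombstone' in Kafka terms)."""
    for rec in sorted(log, key=lambda r: r["lsn"]):
        yield {
            "key": (rec["table"], rec["row"]["id"]),
            "type": rec["op"].lower(),
            "value": None if rec["op"] == "DELETE" else rec["row"],
        }

events = list(to_events(wal))
print([e["type"] for e in events])  # ['insert', 'update', 'delete']
```

Because the events carry the key and full ordering, a downstream microservice can rebuild the table state — or any derived view — without ever querying the source database directly.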
This document provides an overview of cloud computing and testing in the cloud. It discusses key aspects of cloud computing including pay-per-use models, virtual server pools, and various cloud deployment models. It then covers cloud service level agreements and their technical and commercial terms. The document outlines different strategies for testing in the cloud including automation, functional testing, and monitoring. It also discusses challenges like security and reliability and how defects are tracked. Overall the document is providing guidance on testing applications and infrastructure deployed in cloud environments.
Achieving scale and performance using cloud native environmentRakuten Group, Inc.
ID Platform Product can be used by every Rakuten Group company and can easily serve millions of users. Multi-region product challenges are many, for example:
- Ensure 4 9’s availability
- Management across each region
- Alerting and Monitoring across each region
- Auto scaling (Scale up and Scale down) across each region
- Performance (vertical scale up)
- Cost
- DB Consistency Across Multiple Regions
- Resiliency
At the Ecosystem Platform Layer for Rakuten, we handle each of these, and this presentation is about how we handle these challenging scenarios.
The document discusses cloud computing and coordination of cloud applications using ZooKeeper. It provides an overview of challenges for cloud computing, architectural styles like client-server and REST, and workflows involving coordination of multiple activities. It then describes ZooKeeper as a distributed coordination service that implements consensus using Paxos. ZooKeeper provides reliable coordination through a replicated database, atomic broadcasts, and guarantees like sequential consistency.
Improve Customer Experience with Multi CDN SolutionCloudxchange.io
1) Intelligently balancing content delivery among multiple clouds and CDNs using Cedexis' technology can help approach 100% availability by routing around outages.
2) Cedexis' real-user monitoring data and intelligent routing capabilities allow enterprises to control traffic across multiple CDNs and clouds to improve performance and reduce costs.
3) Cedexis helps customers implement hybrid CDN strategies using their own infrastructure like data centers combined with multiple third-party CDNs to gain control and performance benefits while reducing CDN spend.
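The routing idea behind the summary above — steer each request to the CDN with the best recent real-user measurements, skipping any provider that is down — can be sketched generically. This is not Cedexis' actual algorithm; the provider names and sample data are invented for illustration.

```python
from statistics import median

# Hypothetical real-user latency samples (ms) per CDN; an outage shows
# up as no recent successful samples.
measurements = {
    "cdn-a": [32, 35, 31, 40],
    "cdn-b": [28, 29, 27, 30],
    "cdn-c": [],  # outage: no healthy measurements
}

def pick_cdn(samples):
    """Route to the CDN with the lowest median latency, routing around
    any provider with no healthy measurements."""
    healthy = {cdn: median(ms) for cdn, ms in samples.items() if ms}
    if not healthy:
        raise RuntimeError("no CDN available")
    return min(healthy, key=healthy.get)

print(pick_cdn(measurements))  # cdn-b
```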
Accelerating Public Cloud Migration with Multi-Cloud Load BalancingAvi Networks
Watch webinar on-demand https://info.avinetworks.com/webinars/accelerating-public-cloud-migration
Avi Networks, now part of VMware, has helped many enterprises speed up their migration to the public clouds including Azure, AWS and Google Cloud. Learn why customers, especially those with a multi-cloud strategy, are choosing VMware NSX Advanced Load Balancer (by Avi Networks) to drive end-to-end automation, simpler operations and deeper visibility for load balancing and web application firewall.
In this webinar, we will cover:
- An insight into modern software load balancer architecture
- A step-by-step guide to migrating applications to public clouds
- Best practices for automating enterprise-grade load balancing
- On-demand, elastic autoscaling of applications based on real-time analytics
This document discusses QLogic's FabricCache 10000 Series Adapters. It provides an overview of the company and new acquisitions. It then describes how the adapters consolidate fibre channel and caching functionality. The adapters simplify storage with transparent caching and allow cache to be shared across clustered servers. This improves performance for applications like virtual desktop infrastructure, databases and collaboration tools. It provides an example of an Oracle RAC cluster seeing 82% faster response times with caching.
The document provides an overview of modern cloud application architecture compared to traditional on-premises architecture. It discusses key aspects of the software development process including analyzing business requirements, choosing an architecture style, and applying design patterns and best practices. Various architecture styles are described such as microservices, event-driven, and big data architectures. Common cloud design patterns are also discussed.
IBM WebSphere MQ: Managing Workloads, Scaling and Availability with MQ ClustersDavid Ware
IBM WebSphere MQ Clustering can be used to solve many problems, from simplified administration and workload management in an MQ network, to horizontal scalability and continuous availability of messaging applications. This session will show the full range of uses of MQ Clusters to solve real problems, highlighting the underlying technology being used.
This has been superseded by http://www.slideshare.net/DavidWare1/ame-2273-mq-clustering-pdf
Istio as an enabler for migrating to microservices (edition 2022)Ahmed Misbah
This session is targeted towards teams and organizations considering to migrate their applications from monolithic to Microservice architecture by proposing Istio as an enabler. Istio is an implementation of service mesh, a technology useful for migrating to Microservices iteratively and safely.
Migrating application architectures to Microservices is considered a key area of transformation in the IT world. Modernizing legacy applications to Kubernetes-based Microservices can prove to be very challenging if not planned correctly, taking into consideration the right technologies and enablers.
This session explains how Istio can be used as a bridge and enabler for modernizing legacy monolithic applications to Microservices. Topics covered in the session will include:
1- Advantages of migrating to Microservices and service mesh .
2- Designing a Microservice application based on splitting an existing monolithic application.
3- Implementing Microservices iteratively as a strangler fig application with Istio.
4- Features Istio provides as a service mesh platform.
Migration to cloud is no easy task. Start small and learn the core technologies before leveraging the advanced features of the cloud. The cultural change will affect the whole organization from development to business management and sales.
Cloud native applications are the future of software. Modern software is stateless, provided from cloud to heterogeneous clients on demand and designed to be scalable and resilient.
Hybrid Cloud Transformation Fast Track.pptxzhunli4
This document discusses hybrid cloud and provides key considerations for a successful hybrid cloud deployment. It notes that hybrid cloud can span infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). Automation across clouds, building cloud-like services on-premises, and allowing applications to easily deploy on either private or public clouds are identified as important factors. Planning application deployments carefully to consider where it is cheapest to run while balancing business agility is also emphasized.
Similar to AME-1934 : Enable Active-Active Messaging Technology to Extend Workload Balancing and High Availability (20)
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
AI-Powered Food Delivery Transforming App Development in Saudi Arabia.pdfTechgropse Pvt.Ltd.
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
2. Agenda
• Concepts of Business Continuity
  • Business Continuity
  • High Availability
  • Continuous Serviceability
  • Continuous Availability Across Sites
• Messaging Technologies for Business Continuity
• Case Studies
3. What does business continuity mean to you?
• Why do we need a business continuity plan (BCP)?
  • Don't panic in the event of a disaster
• What do we need to consider when preparing a BCP?
  • "Backups" and their locations
  • A central command center, which IBM calls the "Crisis Management Team (CMT)"
  • Maintain a "contact list"
  • Think about all possible "scenarios" and their corresponding action plans
  • Consider "critical" information or applications first
4. Different levels of business continuity
• Enterprise business requires business continuity, ranging from standby to active-active:
  • 0. Disaster Recovery – restore the business after a disaster
  • 1. High Availability – meet service availability objectives, e.g., 99.9% availability, or no more than 8 hours of downtime a year for maintenance and failures
  • 2. Continuous Serviceability – no downtime within one data center (planned or not)
  • 3. Continuous Availability Across Sites – no downtime ever (planned or not)
5. BC Level 1 – High Availability
• HA at different levels (AIX example)
  • Applications follow HA principles
  • Middleware HA technologies
    – Clustering, DB2 pureScale, MQ multi-instance
  • OS HA technologies
    – PowerHA (HACMP)
  • Hardware HA technologies
    – Disk redundancy (RAID, SDD, etc.)
    – FlashCopy, Metro/Global Mirror
    – Server redundancy (CPU, power, etc.)
    – Network redundancy
• The key point is eliminating single points of failure (SPOFs) through redundancy
• RPO = 0!
6. BC Level 2 – Continuous Serviceability
• Usually based on workload takeover
  • Automatic takeover
  • A challenge for application affinity and sequencing
  • Decoupling of components makes maintenance easier
  • Old data may be lost – can be combined with HA
• Maintenance
  • Planned and unplanned downtime
  • Rolling updates
  • Coexistence
• Short RTO!
7. BC Level 3 – Continuous Availability Across Sites
• Two or more sites, separated by unlimited distances, run the same applications and hold the same data, providing cross-site workload balancing and Continuous Availability / Disaster Recovery
• Customer data at geographically dispersed sites is kept in sync via synchronization

                 GDPS/PPRC       GDPS/XRC or GDPS/GM   Active/Active
  Model          Failover model  Failover model        Near-CA model
  Recovery time  2 minutes       < 1 hour              < 1 minute
  Distance       < 20 km         Unlimited             Unlimited

(Diagram: CD1SOURCE/CD1TABLE replicated between the sites.)
• Care about both RPO & RTO!
8. Workload Balancing Through Data Replication
• Both sites run workload simultaneously, possibly at the same or different volumes – but both have the full picture of the data!
• Replicate data from one platform to another
• The two sites may work equally, or have different focuses, for example:
  • The main server still does the existing critical work.
  • Meanwhile, the offloaded server can run data analysis, query data, etc.
  • New business requirements arrive, but you don't want to touch the existing server!
  • Acquiring a new organization may bring in a different database on a different platform – how do you centralize the data?
(Diagram: Site A – a powerful server running critical production OLTP work (DB updates/inserts) under a strict maintenance process; nobody wants it down. Synchronization flows to Site B – a less powerful server running less critical work (DB queries) that can be delayed but may cost high CPU: data analysis, credit-card anti-fraud, new workloads, etc.)
9. Agenda
• Concepts of Business Continuity
• Messaging Technologies for Business Continuity
  • HA Technologies
  • Continuous Serviceability Technologies
  • Continuous Availability Across Sites
• Case Studies
10. MQ Technologies
• HA Technologies
  • QSG for MQ on z/OS
  • Failover Technologies
  • Application HA
• Continuous Serviceability Technologies
  • MQ Clustering
  • Rolling Upgrade
• Continuous Availability Across Sites
  • Data Synchronization
  • Synchronization Application Design
  • How To Replicate Data
  • Performance Considerations
11. HA - QSG for MQ on z/OS
• In a queue-sharing group (QSG), each queue manager has its own private queues and can also serve shared queues held in the coupling facility.
• Queue manager failure:
  • Messages on shared queues OK (kept) – the surviving queue managers in the QSG can still serve them
  • Nonpersistent messages on that queue manager's private queues lost (deleted)
• Coupling facility failure:
  • Nonpersistent messages on shared queues lost (deleted)
  • Persistent messages on shared queues restored from log
  • Nonpersistent messages on private queues OK (kept)
12. HA - Failover Technologies
• Failover
  • The automatic switching of availability of a service
  • Data accessible on all servers
• Multi-instance queue manager
  • Integrated into the WebSphere MQ product
  • Faster failover than an HA cluster
  • Runtime performance of networked storage
  • More susceptible to MQ and OS defects
• HA cluster
  • Capable of handling a wider range of failures
  • Failover historically slower, but some HA clusters are improving
  • Some customers frustrated by unnecessary failovers
  • Extra product purchase and skills required
13. HA - Application Availability
• Application environment
  • Dependencies on a specific DB, broker, WAS?
  • Machine-specific or server-specific?
  • Start/stop operations – in what sequence?
• Message loss
  • Do you really need every message delivered?
• Application affinities
• MQ connectivity
(Diagram: an MQ client application able to connect to any of QM1, QM2 or QM3.)
14. Continuous Serviceability – MQ Cluster
• Workload Balancing
• Service Availability
• Location Transparency (of a kind)
(Diagram: Client 1 connects to a gateway queue manager, which workload-balances requests across instances of Service 1 hosted on queue managers at Site 1 and Site 2.)
15. Multi-Data Center using MQ Cluster
• Global applications: the same clients and services run in both New York and London – but separated by an ocean and 3,500 miles
• Prefer traffic to stay geographically local
• Except when you have to look further afield
• How do you do this with clusters that span geographies?
16. Set this up – The one cluster solution
• Clients always open AppQ
• A local alias determines the preferred region
• Cluster workload priority is used to target the geographically local cluster alias
• Use of CLWLPRTY enables automatic failover
• CLWLRANK can be used for manual failover

New York definitions:
DEF QALIAS(AppQ) TARGET(NYQ)
DEF QALIAS(NYQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(9)
DEF QALIAS(LonQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(4)

London definitions:
DEF QALIAS(AppQ) TARGET(LonQ)
DEF QALIAS(LonQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(9)
DEF QALIAS(NYQ) TARGET(ReqQ) CLUSTER(Global) CLWLPRTY(4)
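The CLWLPRTY behaviour on this slide can be sketched in plain Python. This is a toy simulation of the routing rule (highest-priority available destination wins, lower priority is the failover target), not the real MQ workload-balancing algorithm; the destination names mirror the slide.

```python
# Toy model of MQ cluster workload priority (CLWLPRTY): traffic goes to
# the highest-priority available destination and automatically fails
# over to the lower-priority one when it goes down. Illustrative only.

def route(destinations):
    """Pick the available destination with the highest CLWLPRTY."""
    live = [d for d in destinations if d["available"]]
    if not live:
        raise RuntimeError("no cluster destination available")
    return max(live, key=lambda d: d["clwlprty"])["name"]

# A New York client's view: the local alias is preferred (priority 9),
# the remote region is the lower-priority fallback (priority 4).
dests = [
    {"name": "NYQ",  "clwlprty": 9, "available": True},
    {"name": "LonQ", "clwlprty": 4, "available": True},
]

print(route(dests))            # normally routes locally: NYQ
dests[0]["available"] = False  # the New York service goes down
print(route(dests))            # automatic failover: LonQ
```

Switching the slide's CLWLRANK in for CLWLPRTY would correspond to an operator manually re-ranking destinations rather than this automatic selection.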
17. Set this up – The two cluster solution
• The service queue managers join both geographical clusters (USA and EUROPE)
• Each has separate cluster receivers for each cluster, at different cluster priorities; queues are clustered in both clusters
• The client queue managers are in their local cluster only
(Diagram: New York clients and services sit in the USA cluster, London clients and services in the EUROPE cluster, with the service queue managers belonging to both.)
18. Continuous Availability Across Sites
• Data synchronization is the key component of Active-Active
  • Capture transaction changes in real time
  • Publish the changes at high performance with low latency
• A messaging-based implementation is proven to be the simplest among the various methods of data transmission
• A high-performance, reliable messaging product is needed to meet the following requirements:
  • Simplified application development
  • Ease of use
  • Assured message delivery
  • High performance and scalability
  • Ease of management
19. Active-Active Common Model based on Messaging
• Cross-site workload distribution
• Data synchronization
• Relies on high-performance, reliable messaging transmission
• Flexible application design
• Automation & management
(Diagram: a workload distributor routes traffic to two sites at a distance; at each site a business app updates the business data, and a sync app exchanges changes with the other site over messaging.)
20. How to replicate data?
• Option 1 – Capture transaction activity from the DB2 logs with an independent tool (WebSphere MQ Q-Replication): a log-based Capture process on the source puts changes on MQ queues, and a highly parallel Apply process on the target replays them.
• Option 2 – Modify the existing applications to send out transactional data with the MQ API:
  • At the end of the existing logic, add an MQPUT call to send the data, and program an apply application at the target end.
  • Flexible – can cross different platforms, even different database products – but needs a robust application.
  • Option to put inside or outside the syncpoint: should the existing transaction fail (roll back) if the send fails?
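Option 2 can be sketched end to end with in-memory stand-ins. In this simulation a deque plays the role of the MQ queue and two dicts play the source and target databases; `business_txn` and `apply_all` are invented names for illustration, not part of any MQ API.

```python
# Sketch of the "modify the application" replication option: at the end
# of the existing transaction logic, the app puts the changed row on a
# queue (standing in for MQPUT), and an apply application at the target
# drains the queue and updates the target store (standing in for the
# SQL apply). All names and stores here are illustrative stand-ins.
from collections import deque

replication_q = deque()          # stands in for a WebSphere MQ queue
source_db, target_db = {}, {}    # stand in for DB2 / the target database

def business_txn(key, value):
    source_db[key] = value                              # existing logic
    replication_q.append({"key": key, "value": value})  # added "MQPUT"

def apply_all():
    """Target-side apply application: replay changes in arrival order."""
    while replication_q:
        change = replication_q.popleft()                # stands in for MQGET
        target_db[change["key"]] = change["value"]

business_txn("acct:1001", 250.0)
business_txn("acct:1002", 75.5)
apply_all()
assert target_db == source_db    # both sides now hold the same data
```

Putting the append inside the transaction's syncpoint would make the send atomic with the DB update, at the cost of failing the transaction when the send fails – exactly the trade-off the slide raises.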
21. Performance Tuning Considerations
• Synchronize only the changed data, reducing the data volume
• Introduce more parallelism
  • Multiple synchronization channels for different types of workload
  • More threads in the sync application for parallel processing
  • Multiple MQ channels to alleviate the single-channel-busy problem
• Invest in new MQ features
  • Bigger buffer pools above the bar
  • Sequential pre-fetch
  • Page set read/write performance enhancements
  • Channel performance improvements
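Two of the parallelism points above can be sketched together: split the change stream onto multiple synchronization channels (one per workload type) so OLTP traffic never queues behind batch on a single busy channel, then apply each channel's batch on its own thread. The channel names are invented for illustration, not an MQ convention.

```python
# Sketch of two tuning points: partition changes onto per-workload
# synchronization channels, then apply each channel in parallel.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def partition(changes):
    """One channel per workload type, e.g. SYNC.OLTP, SYNC.BATCH."""
    channels = defaultdict(list)
    for c in changes:
        channels["SYNC." + c["workload"].upper()].append(c)
    return channels

def apply_channel(target, batch):
    """Target-side apply for one channel's batch of changes."""
    for c in batch:
        target[c["key"]] = c["value"]

changes = [
    {"workload": "oltp",  "key": "acct:1", "value": 10},
    {"workload": "batch", "key": "acct:2", "value": 20},
    {"workload": "oltp",  "key": "acct:3", "value": 30},
]
channels = partition(changes)
target = {}
with ThreadPoolExecutor(max_workers=len(channels)) as pool:
    for batch in channels.values():
        pool.submit(apply_channel, target, batch)

print(sorted(channels))  # ['SYNC.BATCH', 'SYNC.OLTP']
print(sorted(target))    # ['acct:1', 'acct:2', 'acct:3']
```

Note that ordering is preserved only within a channel, which is why the slide partitions by workload type rather than spraying changes across channels at random.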
22. MQ buffer pool read-ahead enhancement
• Symptom: when the number of messages overruns the buffer pool allocated for the queue, messages are spilled to disk and must then be retrieved from disk.
• The read-ahead enhancement enables message pre-fetch from disk storage and improves MQGET performance.
• Available as PM63802/UK79853 in 2012 and PM81785/UK91439 in 2013.
• Internal testing shows ~50% improvement with read-ahead enabled (msglen = 6 KB).
• Enable this feature if the MQ buffer pool may overrun.
23. Agenda
• Concepts of Business Continuity
• Messaging Technologies for Business Continuity
• Case Studies
  • Case 1 (Active/Active with the QREP tool)
  • Case 2 (Active/Active with application)
  • Case 3 (Workload offload)
  • Case 4 (Workload offload to multiple systems)
24. Requirements of a bank – Active/Active
• A commercial bank with data centers in Shanghai and Beijing
  • Beijing: one existing data center for disaster recovery, 1,200 km away
  • Shanghai: one existing production data center, plus one new data center for Active-Active; 70 km between the two
• The bank plans to achieve Active-Active between the two Shanghai data centers for its core banking business.

  Workload                rows/s    MB/s
  OLTP                    45K-50K   45
  Batch                   140K      50
  Month-End Batch         130K      70-80
  Interest Accrual Batch  440K      172.5
25. MQ in Q Replication
• Part of the InfoSphere Data Replication product
• A software-based asynchronous replication solution for relational databases
  • Changes are captured from the database recovery log, transmitted as (compact) binary data, and then applied to the remote database(s) using SQL statements.
• Leverages WebSphere MQ for staging and transport
  • Each captured database transaction is published in an MQ message (messages are sent at each commit interval)
  • Staging makes it possible to achieve continuous operation even when the target database is down for some time or the network encounters a problem.
(Diagram: at Site A, Q Capture's log reader reads the active DB2 recovery log and publishes change data; WebSphere MQ provides persistent staging across an unlimited distance; at Site B, Q Apply agents replay the DB2 transactions in parallel against the user tables via SQL statements. Control tables and configuration & monitoring exist on both sides.)
26. MQ v8.0 features for Q Rep scenarios
• Sequential pre-fetch on z/OS
  • TUNE READAHEAD(ON) and TUNE RAHGET(ON), delivered to the bank as a PTF on V7.1 and still applicable to V8
• Page set read/write performance enhancements for QREP on z/OS
  • Changes to the queue manager's deferred write processor; now the default behaviour in V8
• 64-bit enablement of buffer pools on z/OS
  • More real storage can be used for buffers
• SMF enhancements on z/OS
  • Chinit SMF helps with tuning channel performance
• 64-bit log RBA
  • We probably want QREP users to get to this
• Other improvements
  • z/OS miscellaneous improvements (performance and serviceability)
  • Channel performance on z/OS
27. Case 2 (Active/Active with application)
• Active-Active adaptability in small/medium-sized banks
  • Chinese banks have set up storage-based DR solutions, but the business recovery time is too long
  • A Sysplex solution is expensive, its input-output ratio is not high, and the distance is limited
  • Need to consider an application-based solution, mixed with the storage-based solution
• Active-Active is the target model of the modern data center
  • Not only the mainframe – heterogeneous and peripheral distributed platforms also need to be active-active
28. Business Requirement of Active-Active
• The credit card system on the mainframe is based on the VisionPlus (V+) solution by First Data.
• Improve the capacity and availability of the whole credit card system.
• More comprehensive and more efficient services from the bank's payment systems.
• More flexible access, more comprehensive liquidity risk management functions, and an extended scope of system monitoring.
• Refinement of the backup infrastructure.
29. The target Active-Active System Structure
• Both the main system and the secondary system are active
• Real-time data synchronization for OLTP transactions
• The main system and the secondary system back each other up
• Workload can be taken over in case of planned or unplanned failure
(Diagram: front-end channels – OLTP, batch, terminal, anti-fraud, reporting, debt-collection and file transfer – feed the headquarters gateway, which splits the workload by card BIN and sends it to the main and secondary V+ mainframes. Each mainframe runs OLTP processing and batch processing, alongside finance processing in BJ and SH and non-finance processing for VISA/MC/JCB. Three synchronization paths connect the sites: 1. OLTP transactions over MQ; 2. file transfer via DRNET; 3. Global Mirror for batch files.)
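The gateway's workload split can be sketched as routing by card BIN (the leading digits of the card number). The BIN table and prefixes below are invented for illustration; a real gateway would hold the bank's actual BIN ranges.

```python
# Sketch of the headquarters gateway's BIN routing: each card
# transaction is sent to the Beijing or Shanghai site according to the
# card number's BIN prefix. The BIN values here are hypothetical.
BIN_ROUTING = {
    "622202": "BJ",   # hypothetical BIN handled in Beijing
    "622848": "SH",   # hypothetical BIN handled in Shanghai
}

def route_by_bin(card_number, default="SH"):
    """Match the longest known BIN prefix; fall back to a default site."""
    for length in range(len(card_number), 0, -1):
        site = BIN_ROUTING.get(card_number[:length])
        if site:
            return site
    return default

print(route_by_bin("6222021234567890"))  # BJ
print(route_by_bin("6228481111222233"))  # SH
```

In a planned site switch-over (slide 32), draining Beijing amounts to repointing its BIN entries at Shanghai once the sites are fully in sync.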
30. Active-Active Deployment Model
• Continuous Availability – Active-Active: achieve business continuous availability with active-active front ends and mainframes in Beijing and Shanghai
• Reliable Services: synchronize application data over MQ reliable messaging, keeping data consistent in real time
• Data Backup: back up key business data through MQSeries
• Real-time data interchange: the data centers can be located a long distance apart
(Diagram: the headquarters gateway routes by BIN, over encrypted links, to the front-end app system and core data in each city; the Shanghai front-end app system is the main one, and the two sites sync with each other.)
31. Active-Active Logical Model for OLTP
• A self-implemented replication service based on WebSphere MQ for z/OS
(Diagram: a workload distributor routes transactions to the credit card systems at the Beijing and Shanghai sites. In each site's AOR, a Transaction Publisher captures VSAM changes and sends them through the local MQ queue manager; at the other site, a Transaction Replay component retrieves the messages and reapplies the changes. The flow runs in both directions between queue manager 1 and queue manager 2.)
32. Planned Site Switch-Over Procedure
• Stop routing workload to the BJ site
• Wait for the SH site to be fully duplexed with the BJ site's data
• Re-route the workload to the SH site
• Reverse Global Mirror from site B to site A
33. Unplanned Site Switch-Over Procedure
• Stop routing workload to the BJ site
• Re-route the workload to the SH site
• Reverse Global Mirror from site B to site A
34. Characteristics of this case
• Suited to a business with less complex master data and fewer dependent database tables – for example, the credit card business.
• The synchronization applications need to be developed according to your business and technical requirements; this is not an out-of-the-box product.
35. Case 3 (Workload offload)
• Purpose
  • A new business that runs SELECTs frequently.
  • DB2 already exists on z/OS, but the bank wants to buy an existing solution on Linux.
  • So this is active-active data replication within the same data center, across platforms.
• Implementation
  • Modify the existing core banking applications: add the MQ send logic at the end.
  • On the distributed side, develop another application for the DB updates/inserts.
  • Minimize the impact on the existing applications – send outside the syncpoint.
36. Workload offload
• Easier and faster business expansion
• The existing business is barely touched (nearly untouched)
• Flexible – no dependency on the type of target database
(Diagram: the workload distributor sends core workload to the core banking system on z/OS and query workload to the QUERY system on Linux; z/OS MQ is connected to Linux MQ over an MQ channel.)
z/OS application logic:
  • Existing logic
  • MQPUT (data to update in the DB)
  • EXEC CICS SYNCPOINT
Linux apply application logic:
  • According to the data received, update the target with SQL statements or stored procedures
37. Case 4 (Workload offload to multiple systems)
• Purpose
  • Replicate the z/OS database of the core credit card system to a Linux database in a near-real-time window. Multiple consumers on different Linux boxes want the same data.
• Implementation
  • z/OS MQ does a normal put (the same data replication as discussed on the previous pages), so only one copy of the data is transferred to the Linux MQ. That queue manager then does the 1-n publication with the MQ pub/sub engine.
(Diagram: CICS/batch applications issue MQPUT to '/credit/deposit/' on QM1 (z/OS); the message crosses a cluster XMITQ, or an XMITQ in a hierarchy, to the distributed QM2, where 10 subscriptions in total (APP1.SUB → SUB1.Q, APP2.SUB → SUB2.Q, …) each receive a copy, consumed via MQGET or forwarded to remote queue managers.)
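The 1-n publication step can be sketched with a toy broker: one copy of the data crosses the channel, and the pub/sub engine at the far side fans it out to every matching subscription. This is an in-memory model for illustration, not the MQ pub/sub engine.

```python
# Toy model of the 1-n publication on the distributed queue manager:
# a single put to the topic is delivered to every subscription's
# destination queue (10 in the slide's setup).
from collections import defaultdict

class Broker:
    """Minimal topic broker standing in for the MQ pub/sub engine."""
    def __init__(self):
        self.subs = defaultdict(list)      # topic -> destination queues

    def define_sub(self, topic, dest_queue):
        self.subs[topic].append(dest_queue)

    def publish(self, topic, message):
        for q in self.subs[topic]:
            q.append(message)              # one put, n deliveries

qm2 = Broker()
queues = [[] for _ in range(10)]           # SUB1.Q .. SUB10.Q
for q in queues:
    qm2.define_sub("/credit/deposit/", q)

qm2.publish("/credit/deposit/", {"txn": "deposit", "amount": 100})
print(sum(len(q) for q in queues))         # 10 – every subscriber got a copy
```

The economy of the design is visible here: only one message crosses the expensive z/OS-to-Linux channel, and fan-out happens locally on the cheaper distributed side.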
38. Detailed implementation of pub/sub + HA
• Publishers put messages via the gateway queue managers (QMGW01, QMGW02, QMGW03) into MQ cluster CL0; cluster workload balancing routes each message to either QM0A or QM0B (one at CLWLPRTY 7, the other at CLWLPRTY 5).
• QM01-QM04 attach to QM0A/QM0B as pub/sub hierarchy children; each hosts an app with 5 subscriptions.

On QM0A/QM0B:
DEFINE TOPIC(MYTOPIC) TOPICSTR('/Price/Bread')
DEFINE QALIAS(MYTARGET) TARGET(MYTOPIC) TARGTYPE(TOPIC) CLUSTER(CL0)

The duplicated apps (on the gateway QMGRs) just put messages to queue 'MYTARGET'; the cluster's workload-balancing logic routes them to either QM0A or QM0B.

On QM01/QM02/QM03/QM04:
ALTER QMGR PARENT(QM0A)
/* For QM03/QM04, the parent is QM0B */
DEFINE QL(MYTARGETQ1)
DEFINE QL(MYTARGETQ2)
DEFINE QL(MYTARGETQ3)
DEFINE QL(MYTARGETQ4)
DEFINE QL(MYTARGETQ5)
DEFINE SUB(SUB01) TOPICSTR('/Price/Bread') DEST(MYTARGETQ1)
DEFINE SUB(SUB02) TOPICSTR('/Price/Bread') DEST(MYTARGETQ2)
DEFINE SUB(SUB03) TOPICSTR('/Price/Bread') DEST(MYTARGETQ3)
DEFINE SUB(SUB04) TOPICSTR('/Price/Bread') DEST(MYTARGETQ4)
DEFINE SUB(SUB05) TOPICSTR('/Price/Bread') DEST(MYTARGETQ5)
41. Notices and Disclaimers (con’t)
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products in connection with this
publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to
interoperate with IBM’s products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED,
INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any
IBM patents, copyrights, trademarks or other intellectual property right.
• IBM, the IBM logo, ibm.com, Bluemix, Blueworks Live, CICS, Clearcase, DOORS®, Enterprise Document
Management System™, Global Business Services ®, Global Technology Services ®, Information on Demand,Management System™, Global Business Services ®, Global Technology Services ®, Information on Demand,
ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON, OpenPower, PureAnalytics™,
PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®,
pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, SoDA, SPSS, StoredIQ, Tivoli®, Trusteer®,
urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of
International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and
service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on
the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml.
42. Thank You
Your feedback is important!
Access the InterConnect 2015 Conference CONNECT Attendee Portal to complete your session surveys from your smartphone, laptop or conference kiosk.