This document provides best practices for implementing high availability and disaster recovery solutions for Informix databases using HDR, RSS, SDS, and connection manager technologies. It discusses configuration parameters and strategies for minimizing data loss and downtime in the event of failures. Key recommendations include using unbuffered logging, tuning bufferpool and I/O settings, and coordinating transactions across nodes for applications.
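To make the parameter discussion concrete, here is a minimal sketch of an ONCONFIG fragment for an HDR primary (parameter names are from the Informix ONCONFIG reference; the values are illustrative examples, not recommendations for any particular workload):

```ini
# Illustrative ONCONFIG fragment for an HDR primary (example values only)
DRAUTO           3     # failover decision delegated to the Connection Manager arbitrator
DRINTERVAL      -1     # -1 = flush the replication buffer synchronously with commits
DRTIMEOUT       30     # seconds to wait before declaring the HDR peer unreachable
LOG_INDEX_BUILDS 1     # log index builds so secondaries can replay them
```

Synchronous flushing (DRINTERVAL -1) minimizes data loss at the cost of commit latency; a positive interval trades some exposure for throughput.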
Restoring Restoration's Reputation in Kafka Streams with Bruno Cadonna & Luca... (Hosted by Confluent)
"Restoring local state in Kafka Streams applications is indispensable for recovering after a failure or for moving stream processors between Kafka Streams clients. However, restoration has a reputation for being operationally problematic, because a Streams client occupied with restoration of some stream processors blocks other stream processors that are ready from processing new records. When the state is large this can have a considerable impact on the overall throughput of the Streams application. Additionally, when failures interrupt restoration, restoration restarts from the beginning, thus negatively impacting throughput further.
In this talk, we will explain how Kafka Streams currently restores local state and processes records. We will show how we decouple processing from restoring by moving restoration to a dedicated thread and how throughput profits from this decoupling. We will present how we avoid restarting restoration from the beginning after a failure. Finally, we will talk about the concurrency and performance problems that we had to overcome and we will present benchmarks that show the effects of our improvements."
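For context, Kafka Streams also offers warm-standby configuration that reduces how much restoration a failover requires; a minimal sketch of the relevant properties (these settings come from KIP-441, available since Apache Kafka 2.6; the values are illustrative):

```properties
# Keep one warm standby copy of each state store, so failover restores from a near-current replica
num.standby.replicas=1
# Assign a task to a client whose state lags by at most this many records
acceptable.recovery.lag=10000
# Probing rebalances gradually move tasks once standbys have caught up
probing.rebalance.interval.ms=600000
```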
China Telecom Americas: SD-WAN Overview (Vlad Sinayuk)
China Telecom Americas has the only fully licensed SD-WAN service to connect between mainland China, North America, Europe, Asia Pacific, and elsewhere in the world.
Informix Update New Features 11.70.xC1+ (IBM Sverige)
An update on the Informix products, above all the very latest release of the Informix database, as well as the future roadmap.
This presentation was given at IBM Data Server Day on 22 May in Stockholm by Rickard Linck, IT Specialist - Informix/Optim/DB2, Client Technical Specialist, IBM.
My use case is to provide monitoring, improve overall search data quality, find unusual patterns in users’ search behavior, and report the on-site intent back to the respective business stakeholders. To achieve this, I explored various big data processing engines that can process huge volumes of data with complex business logic in real time. Eventually, I chose Flink stream processing. This talk will showcase how I used Flink to accomplish my goal.
During the OpenStack Tokyo Summit we provided an overview of how Workday started the production deployment with a very robust and efficient CI/CD process, which is explained here.
Implementing Exactly-once Delivery and Escaping Kafka Rebalance Storms with Y... (Hosted by Confluent)
"Even though stream processing has come a long way in the last few years, ensuring exactly-once delivery remains a difficult problem to solve.
This becomes an even bigger challenge when your consumers are distributed applications, and their Kubernetes pods can be scaled-out, scaled-in or simply restarted at any given moment, causing Apache Kafka to go into a “rebalance storm”.
In this talk, we’ll walk you through how we implemented exactly-once delivery with Kafka by managing Kafka transactions the right way, and how we escaped endless rebalance storms when running hundreds of consumers on the same Kafka topic.
We will discuss the issues we faced building Akamai’s data ingestion infrastructure on Azure, processing malicious traffic at internet scale.
The session covers:
- Kafka delivery semantics
- Kafka transactional API
- Kafka “anti-rebalance” tips and tricks"
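The transactional API the session covers is driven largely by client configuration; a minimal sketch of the properties involved (the `transactional.id` value is a hypothetical example — it must simply be stable per producer instance):

```properties
# Producer side: enable idempotence and transactions
enable.idempotence=true
transactional.id=payments-processor-7
acks=all

# Consumer side: only read messages from committed transactions
isolation.level=read_committed
```

On top of these settings, the producer drives the transaction lifecycle with `initTransactions()`, `beginTransaction()`, `sendOffsetsToTransaction()`, and `commitTransaction()`.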
How Orange Financial combat financial frauds over 50M transactions a day usin... (StreamNative)
You will learn how Orange Financial combats financial fraud across over 50M transactions a day using Apache Pulsar. The presentation was shared at the Strata Data Conference in New York, US, in September 2019.
Everything You Always Wanted to Know About Kafka’s Rebalance Protocol but Wer... (Confluent)
Apache Kafka is a scalable streaming platform with built-in dynamic client scaling. The elastic scale-in/scale-out feature leverages Kafka’s “rebalance protocol” that was designed in the 0.9 release and has been improved ever since. The original design aimed at on-prem deployments of stateless clients. However, it does not always align with modern deployment tools like Kubernetes and stateful stream processing clients like Kafka Streams. Those shortcomings led to two major recent improvement proposals, namely static group membership and incremental rebalancing (which will hopefully be available in version 2.3). This talk provides a deep dive into the details of the rebalance protocol, starting from its original design in version 0.9 up to the latest improvements and future work. We discuss internal technical details, pros and cons of the existing approaches, and explain how you configure your clients correctly for your use case. Additionally, we discuss configuration tradeoffs for stateless, stateful, on-prem, and containerized deployments.
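The two proposals mentioned map to concrete consumer settings; a minimal sketch (the `group.instance.id` value is a hypothetical example; static membership shipped in Kafka 2.3 and the cooperative assignor in 2.4):

```properties
# Static group membership (KIP-345): a stable id avoids a full rebalance on restart
group.instance.id=ingest-pod-3
session.timeout.ms=45000   # allow a restart to complete within the session

# Incremental cooperative rebalancing: only partitions that actually move are revoked
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor
```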
Why My Streaming Job is Slow - Profiling and Optimizing Kafka Streams Apps (L... (Confluent)
Kafka Streams performance monitoring and tuning is important for many reasons, including identifying bottlenecks, achieving greater throughput, and capacity planning. In this talk we’ll share the techniques we used to achieve greater performance and save on compute, storage, and cost. We’ll cover: identifying design bottlenecks by reviewing logs, metrics, and serdes; state store access patterns, design, and optimization; using profiling tools such as JMX, YourKit, etc.; performance tuning of Kafka and Kafka Streams configuration and properties; JVM optimization for correct heap size and garbage collection strategies; and functional vs. imperative programming trade-offs.
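As a concrete starting point for the profiling-tools item, a minimal sketch of JVM options that expose JMX to tools such as JConsole or YourKit, plus common heap/GC levers (lab-only settings — production deployments should enable authentication and SSL; the values are illustrative):

```properties
# Expose JMX for remote profiling (insecure; lab use only)
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

# Common JVM tuning levers: fixed heap size and GC choice
-Xms4g -Xmx4g -XX:+UseG1GC
```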
Apache Impala is a complex engine and requires a thorough technical understanding to utilize it fully. Without proper configuration or usage, Impala’s performance becomes unpredictable, and end-user experience suffers. However, for many users and administrators, the right configuration of Impala is still a mystery.
Drawing on work with some of the largest clusters in the world, Manish Maheshwari shares ingestion best practices to keep an Impala deployment scalable and details admission control configuration to provide a consistent experience to end users. Manish also takes a high-level look at Impala’s query profile, which is used as a first step in any performance troubleshooting, and discusses common mistakes users and BI tools make when interacting with Impala. Manish concludes by detailing an ideal setup to show all of this in practice.
Apache Pulsar: Why Unified Messaging and Streaming Is the Future - Pulsar Sum... (StreamNative)
Data insights and data-driven strategies create the competitive differentiators companies thrive on today. The need for unified messaging and streaming has never been more apparent.
Pulsar started with the goal of building a global, geo-replicated infrastructure to serve Yahoo!’s messaging needs. With the increased need to process both business events (such as payment request, billing request) and operational events (such as log data, click events, etc), the team at Yahoo! set out to build a true unified infrastructure platform to handle all in-motion data. That technology became Apache Pulsar.
In this talk, Matteo Merli and Sijie Guo will dive into the landscape of unified messaging and streaming, how Pulsar helps companies achieve this vision, and what the future of Pulsar will look like.
Kafka: Journey from Just Another Software to Being a Critical Part of PayPal ... (Confluent)
PayPal currently processes tens of billions of signals per day from different sources in batch and streaming mode. The data processing platform is the one powering these different analytical needs and use cases, not just at PayPal but also at our adjacencies such as Venmo, Hyperwallet, and iZettle. End users of this platform demand access to data insights with as much flexibility as possible to explore them with low processing latency.
One such use case is our Switchboard (data de-multiplexer) platform, where we process approximately 20 billion events daily and provide data to different teams and platforms within PayPal, and also to platforms outside PayPal, for more insights. When we started building this platform, Kafka was just another asynchronous message processing platform for us, but we have seen it evolve to a point where it adds value not just in terms of event processing but also for platform resiliency and scalability.
Takeaway for the audience: most people work with and have knowledge about data. With this talk I want to present information that is relevant and meaningful to the audience, along with examples that will make it easier for attendees to understand our complex system and hopefully provide some practical takeaways for using Kafka on similar problems of their own.
Flink Forward San Francisco 2022.
This talk will take you on the long journey of Apache Flink into the cloud-native era. It started all the way back when Hadoop and YARN were the standard way of deploying and operating data applications.
We're going to deep dive into the cloud-native set of principles and how they map to the Apache Flink internals and recent improvements. We'll cover fast checkpointing, fault tolerance, resource elasticity, minimal infrastructure dependencies, industry-standard tooling, ease of deployment and declarative APIs.
After this talk you'll get a broader understanding of the operational requirements for a modern streaming application and where the current limits are.
by David Moravek
Highly Available Kafka Consumers and Kafka Streams on Kubernetes with Adrian ... (Hosted by Confluent)
"Getting started with a Kafka Consumer or Streams application is relatively straight forward, but having those clients be highly available and resilient in a real-world application where compute is becoming more ephemeral is another matter. Kubernetes is a popular place to deploy these workloads, making these challenges more accessible and prevalent than ever.
In this talk I will introduce a simple consumer implementation with a default configuration and discuss the KIPs and features that have been introduced over time to limit how the hostile world of cloud computing can impact your real-time consuming applications.
Once the Kafka Consumer configurations are under our belt, we can see how these same concepts are applied and augmented in Kafka Streams, and then cater for new concepts such as maintaining and restoring local state store data.
Throughout the talk I will show out-of-the box Kubernetes features and deployment configurations that harmonise with the Kafka clients and their configurations to achieve a highly available consumer or streams deployment.
If you have found your real-time streaming applications stopping the world through rebalancing, starting up slower than expected through a routine deployment, taking an age to restore state or over time found them to be less reliable as your platform engineers make your Kubernetes cluster more awesome, you will hopefully find something in this talk you could apply tomorrow."
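A minimal sketch of how Kubernetes-side settings can harmonise with the Kafka client configuration (the resource names and the pairing with `group.instance.id` are illustrative assumptions, not the speaker's exact setup):

```yaml
# Illustrative StatefulSet fragment: stable pod identity feeds static membership
apiVersion: apps/v1
kind: StatefulSet            # stable pod names, usable as group.instance.id
metadata:
  name: orders-consumer
spec:
  replicas: 3
  template:
    spec:
      terminationGracePeriodSeconds: 60   # time to close the client cleanly and commit offsets
      containers:
        - name: consumer
          env:
            - name: POD_NAME              # inject the pod name via the downward API
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
```

The application can then use `POD_NAME` as its `group.instance.id`, so a routine pod restart rejoins the group without triggering a full rebalance.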
This document explains the dual node VLT deployment strategies with its associated network reference architecture. Various VLT deployment topologies are also explained with emphasis on best practices and recommendations for some of the network scenarios. This document also covers the configuration and troubleshooting of VLT using relevant show commands and different outputs.
Integration and Interoperation of existing Nexus networks into an ACI Archite... (Cisco Canada)
Mike Herbert, Principal Engineer INSBU, at Cisco Connect Toronto focused on the integration and interoperation of existing Nexus networks into an ACI architecture.
This presentation explains how to forward the client's source IP to backend servers when HAProxy runs in TCP mode.
* There was a typo partway through, and I wanted to upload a corrected version, but according to SlideShare the re-upload feature has been removed. Please be forgiving of the typos and any awkward passages.
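One common approach to this problem is the PROXY protocol; a minimal haproxy.cfg sketch (addresses and names are illustrative, and the backend server must itself be configured to accept the PROXY protocol header):

```haproxy
# TCP mode with the PROXY protocol, so the backend can recover the client source IP
frontend ft_app
    bind *:3306
    mode tcp
    default_backend bk_app

backend bk_app
    mode tcp
    server app1 10.0.0.11:3306 send-proxy   # prepends the PROXY header with the client IP
```

An alternative, when the PROXY protocol is not an option, is transparent proxying (`source 0.0.0.0 usesrc clientip`), which requires kernel-level support on the HAProxy host.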
In this presentation, we will cover the IAP-VPN architecture which includes the following two components: IAPs at branch sites and controller at the data center. Check out the webinar recording where this presentation was used: https://community.arubanetworks.com/t5/Wireless-Access/Technical-Webinar-Recording-Slides-Aruba-Instant-AP-VPN/m-p/300742
Register for the upcoming webinars: https://community.arubanetworks.com/t5/Training-Certification-Career/EMEA-Airheads-Webinars-Jul-Dec-2017/td-p/271908
How Pulsar Enables Netdata to Offer Unlimited Infrastructure Monitoring for F... (StreamNative)
The Netdata Agent is free, open source single-node monitoring software. Netdata Cloud is a free, closed source, software-as-a-service that brings together metadata from endpoints running the Netdata Agent, giving a complete view of the health and performance of an infrastructure. All the metrics remain on the Netdata Agent, making Netdata Cloud the focal point of a distributed, infinitely scalable, low cost solution.
The heart of Netdata Cloud is Pulsar. Almost every message coming from and going to the open source agents passes through Pulsar. Pulsar's infinite number of topics has given us the flexibility we needed and in some cases, every single Netdata Agent has its own unique Pulsar topic. A single message from an agent or from a service that processes a front end request can trigger several other Pulsar messages, as we also use Pulsar for communication between microservices (using a CQRS pattern with shared subscriptions for scalability).
The reliable persistence of messages has allowed us to replay old events to rebuild old and build new materialized views and debug specific production issues. It's also what will enable us to implement an event sourcing pattern, for a new set of features we want to introduce shortly.
We have had a few issues with a specific client and our shared subscriptions that we're working on resolving, but overall Pulsar has proven to be one of the most reliable parts of our infrastructure and we decided to proceed with a managed services agreement.
Geographically Distributed Multi-Master MySQL Clusters (Continuent)
Global data access can greatly expand the reach of your business. Continuent's multi-site multi-master (MSMM) solutions enable applications to accept write traffic in multiple locations, across on-premises deployments and vCloud Air.
As an example, this includes the following real-world, business-critical use cases:
- Improve performance for globally distributed users registering hardware devices by permitting updates on the geographically closest site
- Ensure availability of credit card processing by spreading transaction processing across two or more sites. Users can still process credit card transactions if a single site is unavailable to them for any reason, including end-user Internet routing problems
- Enable business continuity by using multi-master updates on different hosting providers for service scalability, personalization and software upgrades of GPS devices.
Individual Continuent clusters already provide excellent single-site database availability and performance. In this webinar we review the benefits of combining multiple Continuent clusters into a global multi-site multi-master (MSMM) topology for:
- Optimizing your installation for MSMM
- Optimizing your application for MSMM
- Monitoring and administration
- Failover and recovery of individual servers or entire locations.
VMware End-User-Computing Best Practices Poster (VMware Academy)
The End-User-Computing Best Practices poster gives you up-to-date tips and guidelines for configuring and sizing the wide range of EUC products. Enlarge and print!
Zero Downtime Architectures based on the JEE platform. Almost every big enterprise with an online business tries to design its applications so that they are always online. But is that also the case when we upgrade the database cluster? When we switch the whole data center? Based on a customer project, we present common architecture principles that enable you to do all this without any service interruption and, most importantly, without any stress.
Tungsten Connector / Proxy is truly the secret sauce for the Tungsten Clustering solution. Watch this webinar to learn how the Tungsten Connector enables zero-downtime MySQL maintenance via the manual switch operation, and gain an understanding of the various configuration options for doing local reads in remote composite clusters.
AGENDA
- Review the cluster architecture
- Understand the role of the Connector
- Describe Connector deployment best practices (app, dedicated with lb, db with lb)
- Explore zero-downtime MySQL maintenance using the manual role switch procedure
- Learn about Connector routing patterns inside a composite cluster
- Illustrate a manual site switch
- Explain read affinity and the vast performance improvement of local reads
- Examine Connector multi-cluster support
NGINX Plus R7 is full of new features to help you deliver your applications. HTTP/2 is now fully supported. A redesigned graphical dashboard helps you quickly identify problems. And improvements to the core of NGINX enhance performance, security, and reliability for all your applications. These changes bring tremendous capability to help make your applications faster and more secure than ever.
View full webinar on demand at https://www.nginx.com/resources/webinars/whats-new-in-nginx-plus-r7/
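Enabling HTTP/2 in that release is essentially a one-line change on the TLS listener; a minimal nginx.conf sketch (server name and certificate paths are illustrative; syntax as introduced in NGINX 1.9.5 / NGINX Plus R7):

```nginx
server {
    listen 443 ssl http2;          # HTTP/2 is negotiated over TLS via ALPN
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;
}
```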
Slow things down to make them go faster [FOSDEM 2022] (Jimmy Angelakos)
Talk from FOSDEM 2022
It's easy to get misled into overconfidence based on the performance of powerful servers, given today's monster core counts and RAM sizes. However, the reality of high concurrency usage is often disappointing, with less throughput than one would expect. Because of its internals and its multi-process architecture, PostgreSQL is very particular about how it likes to deal with high concurrency and in some cases it can slow down to the point where it looks like it's not performing as it should. In this talk we'll take a look at potential pitfalls when you throw a lot of work at your database. Specifically, very high concurrency and resource contention can cause problems with lock waits in Postgres. Very high transaction rates can also cause problems of a different nature. Finally, we will be looking at ways to mitigate these by examining our queries and connection parameters, leveraging connection pooling and replication, or adapting the workload.
Topics:
1. Understand what we mean by high concurrency.
2. Understand ACID & MVCC in Postgres.
3. Understand how high concurrency affects Postgres performance.
4. Understand how locks/latches affect Postgres performance.
5. Understand how high transaction rates can affect Postgres.
6. Mitigation strategies for high concurrency scenarios.
IBM IMPACT 2014 - AMC-1882 Building a Scalable & Continuously Available IBM M...Peter Broadhurst
An introduction to one possible MQ architecture - an active/active multiple queue manager client<->server environment.
Summary of detailed topology articles available here:
http://ow.ly/vrUUV
And MQDev blog+discussion on client attachment here:
http://ibm.co/MM8rMl
A presentation on how applying Cloud Architecture Patterns using Docker Swarm as orchestrator is possible to create reliable, resilient and scalable FIWARE platforms.
GE IOT Predix Time Series & Data Ingestion Service using Apache Apex (Hadoop)Apache Apex
This presentation will introduce usage of Apache Apex for Time Series & Data Ingestion Service by General Electric Internet of things Predix platform. Apache Apex is a native Hadoop data in motion platform that is being used by customers for both streaming as well as batch processing. Common use cases include ingestion into Hadoop, streaming analytics, ETL, database off-loads, alerts and monitoring, machine model scoring, etc.
Abstract: Predix is an General Electric platform for Internet of Things. It helps users develop applications that connect industrial machines with people through data and analytics for better business outcomes. Predix offers a catalog of services that provide core capabilities required by industrial internet applications. We will deep dive into Predix Time Series and Data Ingestion services leveraging fast, scalable, highly performant, and fault tolerant capabilities of Apache Apex.
Speakers:
- Venkatesh Sivasubramanian, Sr Staff Software Engineer, GE Predix & Committer of Apache Apex
- Pramod Immaneni, PPMC member of Apache Apex, and DataTorrent Architect
Lookout on Scaling Security to 100 Million DevicesScyllaDB
The massive increase of security-related data requires companies to respond with new approaches to ingestion. Learn how Lookout has changed its approach for ingesting telemetry to meet their goal of growing from 1.5 million devices to 100 million devices and beyond, using Kafka Connect and switching from AWS DynamoDB to Scylla.
Do you get too many visitors on the website, getting maximum hits on your site may crash your site, your site may get stuck or it may go through a downtime? How to avoid such instances?
In Red Hat Enterprise Linux 7 a new method of interacting with netfilter has been introduced: firewalld.
firewalld is a system daemon that:
Can configure and monitor the system firewall rules
Applications can talk to firewalld to request ports to be opened using the Dbus messaging system
Both covers IPv4, IPv6, and potentially ebtables settings is installed from the firewalld package. This package is part of a base install , but not part of a minimal install
Simplifies firewall management by classifying all network traffic into zones.
Similar to Always on high availability best practices for informix (20)
Choosing the right platform for your Internet -of-Things solutionIBM_Info_Management
Deploying a solution within the context of the Internet of Things (IoT) typically requires involves many considerations, ranging from the hardware involved to the architecture of the whole environment, and from the decisions about where processing and analytics is to take place to the software choices that allow you to exploit the Internet of Things. This presentation will focus on the need to support a homogeneous processing environment. That is, it will be preferable if processing in all tiers of the IoT is consistent and compatible. This joint presentation will go on to discuss the implications of this consistency for database selection.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
2. Please Note:
• IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion.
• Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
• The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract.
• The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
3. Industry Terms
• Recovery Point Objective (RPO)
§ How much data are you willing to lose?
• Recovery Time Objective (RTO)
§ How much time to recover from a failure?
• Example
§ ONCONFIG parameter RTO_SERVER_RESTART monitors transaction activity and coordinates checkpoints so that, in the event of a server crash, the server can restart within the time specified by RTO_SERVER_RESTART.
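The example above maps to a single ONCONFIG setting. A minimal sketch (the value is illustrative, not a recommendation):

```
# $INFORMIXDIR/etc/$ONCONFIG
RTO_SERVER_RESTART 60   # target fast-recovery time in seconds; 0 disables the policy
```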
4. Hot Standby
• Fred wants to implement an RTO policy of 15 seconds in the event of a failure.
(Diagram: Primary replicating to Secondary)
5. Updatable Secondary
• Fred wants to extend his HDR solution to utilize the secondary.
(Diagram: Primary replicating to Secondary)
6. Updatable Secondary
• How do updates on the secondary work?
§ Row locks are acquired on the secondary as updates are applied from the primary
§ The initial read is done on the secondary
§ The update is forwarded to the primary
• If row versioning is defined in the schema for the table, the version is compared to determine if the update can be applied
• Otherwise, the whole row is compared to determine if the update can be applied
• What isolation levels are supported on a secondary?
§ Dirty Read
§ Committed Read
§ Committed Read Last Committed
http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.admin.doc/ids_admin_0874.htm%23ids_admin_0874?lang=en
http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.admin.doc/ids_admin_0875.htm?lang=en
http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.admin.doc/ids_admin_0877.htm?lang=en
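Row versioning as mentioned above is a schema property. A hedged SQL sketch (the table name is hypothetical):

```
-- Add version/checksum shadow columns so the primary can compare
-- versions instead of whole rows when applying forwarded updates
ALTER TABLE customer ADD VERCOLS;
-- or at creation time:
-- CREATE TABLE customer (...) WITH VERCOLS;
```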
7. Application Perspective - Locking & Queries
(Timeline diagram: anatomy of an update)
§ Transaction: Begin Work → Read Row (V1) → Update Row (V2) → Apply Update on Sec → Commit Work → Apply Commit on Sec
§ Locks: ulock at read and xlock at update on the primary, xlock on the secondary when the update is applied; the primary lock is released at commit, the secondary lock when the commit is applied
§ Query on Primary: row(V1) before the update; once the row is updated, DR=row(V2), CRLC=row(V1), CR/CS/RR=block
§ Query on Secondary: row(V1) before the update is applied; afterwards, DR=row(V2), CRLC=row(V1), CR=block
8. Application Perspective - Locking & Updates
(Timeline diagram: anatomy of an update)
§ Transaction: Begin Work → Read Row (V1) → Update Row (V2) → Apply Update → Commit Work → Apply Commit
§ Locks: ulock at read and xlock at update on the primary, xlock on the secondary when the update is applied; locks are released at commit on the primary and when the commit is applied on the secondary
§ Concurrent update on Primary: reads row(V1), then blocks until the lock is released
§ Concurrent update on Secondary: if it is a hot row, the update is pushed to the primary; otherwise it reads row(V1), then blocks
9. Application behavior
• I’m on an updatable secondary and my application just did an update to a row, but it’s not committed yet. If I go read the row, what version of the row will I see?
§ When my session (or any other session) attempts to read a recently updated row, it waits for the secondary server I’m connected to to replay that row update before reading the row.
(Timeline diagram: Begin Work → Read Row (V1) → Forward Update to Sec → Wait for Apply of Update on Sec → Commit Work → Apply Commit on Sec; reading the row again blocks until the update is applied)
10. Application behavior
• When I get error 7350, “Attempt to update a stale version of a row”, what happened?
§ My application read a row from the secondary node, and between the time the row was read and forwarded to the primary to be updated, another transaction was able to complete an update to the row.
(Timeline diagram: the update on the Secondary reads row (V1); meanwhile an update on the Primary reads row (V1), updates it to row (V2), and commits row (V2); the secondary then forwards its update (V3). At this point the forwarded update is the wrong version relative to what is committed, and the error is returned.)
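The application-side response to error 7350 is a classic optimistic-concurrency retry: re-read the row and try again. This Python sketch illustrates the idea only; it is not Informix client code, and `primary`, `forward_update`, and `update_with_retry` are hypothetical stand-ins.

```python
class StaleVersionError(Exception):
    """Stands in for Informix error -7350 (stale row version)."""

# In-memory stand-in for the row as the primary sees it.
primary = {"version": 1, "value": "A"}

def forward_update(read_version, new_value):
    """Apply an update forwarded from a secondary; the primary
    rejects it if the row version changed since it was read."""
    if primary["version"] != read_version:
        raise StaleVersionError("row changed between read and forward")
    primary["version"] += 1
    primary["value"] = new_value

def update_with_retry(new_value, max_retries=3):
    """On a stale-version error, re-read the row and retry."""
    for _ in range(max_retries):
        read_version = primary["version"]   # initial read on the secondary
        try:
            forward_update(read_version, new_value)
            return True
        except StaleVersionError:
            continue                        # another transaction won; re-read
    return False
```

In a real application the retry loop would re-run the whole read/update unit of work against the cluster, not just the final statement.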
11. What’s new?
• My application is using the UPDATABLE_SECONDARY configuration to perform queries and updates on all the members of my HDR cluster. How do I coordinate transactions across the HDR cluster?
• CLUSTER_TXN_SCOPE
ONCONFIG and session parameter used to control when the application receives an acknowledgement of the commit of a user’s transaction.

CLUSTER_TXN_SCOPE | Connected to Primary | Connected to Secondary
SESSION | ACK when commit is complete | ACK when commit is complete on primary
SERVER (default) | ACK when commit is complete | ACK when commit is complete on primary and processed on the node I’m connected to
CLUSTER | ACK when commit has been applied to all nodes | ACK when commit has been applied to all nodes
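Since CLUSTER_TXN_SCOPE is both an ONCONFIG and a session parameter, a session can override the server default. A hedged sketch of the session-level form:

```
-- Session-level override (sketch; the server default is set in ONCONFIG)
SET ENVIRONMENT CLUSTER_TXN_SCOPE 'CLUSTER';
```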
12. What’s new?
• DRINTERVAL & HDR_TXN_SCOPE
These parameters work together to determine synchronization between primary and secondary nodes
• FULL_SYNC is new

DRINTERVAL | HDR_TXN_SCOPE | Buffered logging | Unbuffered logging
-1 | n/a | Async | Near sync
0 | FULL_SYNC | Full sync | Full sync
0 | ASYNC | Async | Async
0 | NEAR_SYNC | Near sync | Near sync
>0 | n/a | Async | Async
13. DRINTERVAL & HDR_TXN_SCOPE
• My RPO is 0 for a single point of failure
DRINTERVAL=0
HDR_TXN_SCOPE=NEAR_SYNC
This setting ensures that committed transactions are received by the secondary. If the primary fails, all committed transactions are guaranteed to be at least in volatile memory on the secondary.
• My RPO is 10 seconds for a single point of failure
DRINTERVAL=10
Ensures a buffer is sent to the secondary at least every 10 seconds
• My RPO is 0 for multiple points of failure
DRINTERVAL=0
HDR_TXN_SCOPE=FULL_SYNC
This setting ensures that committed transactions are received and written to disk by the secondary. If the primary fails, all committed transactions are guaranteed to be hardened to disk on the secondary.
14. Offsite disaster
• Fred wants to extend his HDR solution to include offsite replication in case of a site disaster.
(Diagram: Primary and Secondary, with an RSS Secondary at the offsite location)
15. Remote Standalone Secondary (RSS)
• You want your remote site located in Timbuktu? How’s the network connectivity to that site?
• You dropped what database?
§ DELAY_APPLY
• You’re planning to do what maintenance this weekend?
§ Stop Apply command
• RSS Limitations
§ Can only be promoted to HDR secondary, not primary
§ SYNC mode not supported
16. Improved Network performance
• SMX_NUMPIPES
§ There is a limit on how many TCP buffers can be in flight between a pair of ports before a TCP ACK must be sent to the sender; this is referred to as the TCP window. SMX can be configured to use multiple pairs of ports between two given servers, in effect filling the gaps that would otherwise occur on the network wire. This is especially advantageous if the network connection is over a WAN or of less than the best quality. In such conditions, setting SMX_NUMPIPES to 2 can result in twice as much data being sent across the wire.
§ SMX reorganizes the transmissions on the target node so that they appear to have been received across a single serial connection.
18. What’s really cool?
Hey Scott, we are having an online sale this weekend and we expect a huge influx of internet activity on our web site. I might have forgotten to tell you that. Can our infrastructure handle that?
• Shared Disk Secondary (SDS)
§ Adjust capacity as demand changes
§ Does not duplicate disk space
§ No special hardware
• Cluster manager or SDS_LOGCHECK
§ Coexists with ER, HDR & RSS
§ Primary can fail over to any SDS
• ifxclone
§ Make a quick copy
19. What’s improved?
• Index page logging (IPL)
§ Copies a newly created index from the primary to the secondary using the logical log
§ Required for RSS secondary servers
§ Big performance boost (4x)
20. Best Practices for HDR, RSS, SDS
• All nodes which are candidates for failover (HDR secondary & SDS) should have similar specs in case there is a failover
• Use unbuffered database logging to minimize lost transactions
• ONCONFIG parameter OFF_RECVRY_THREADS should be set to a prime number near (# of CPUs) * 3
• Turn on AUTO_READAHEAD on the secondary
• A larger BUFFERPOOL can alleviate some random I/O
• ONCONFIG parameter TEMPTAB_NOLOG=1 to default temp tables to non-logging
• ONCONFIG parameter HA_ALIAS = a TCP network-based server alias
§ Used to tell the server which network interface and port to use for server-to-server replication traffic
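The general best practices above can be collected into a few ONCONFIG lines. Values here are illustrative sketches, to be sized to your own hardware:

```
# Illustrative values only
OFF_RECVRY_THREADS 23      # a prime number near (CPUs * 3), e.g. for 8 CPUs
AUTO_READAHEAD 1           # enable automatic read-ahead on the secondary
TEMPTAB_NOLOG 1            # temp tables default to non-logging
HA_ALIAS srv1_tcp          # hypothetical TCP-based alias for replication traffic
```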
21. Best practices for HDR
• ONCONFIG parameter DRINTERVAL=0 and use HDR_TXN_SCOPE (ASYNC, NEAR_SYNC or FULL_SYNC)
• ONCONFIG parameter DRAUTO=3 and use the connection manager to arbitrate failover
• ONCONFIG parameter LOG_STAGING_DIR always set
§ Some log records, like CHECKPOINT, require serialized processing, which can block the primary from sending log data. When an HDR secondary is configured with a log staging directory, the logs can be spooled to disk while the serialized log record is applied on the secondary. Once the log record has been applied, the secondary applies the spooled log until it catches up with the primary. This can alleviate backflow pressure from the secondary to the primary.
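The HDR recommendations above, sketched as ONCONFIG lines (values illustrative; the staging path is hypothetical):

```
# HDR synchronization and failover (illustrative)
DRINTERVAL 0
HDR_TXN_SCOPE NEAR_SYNC    # or ASYNC / FULL_SYNC, per your RPO
DRAUTO 3                   # failover arbitrated by the connection manager
LOG_STAGING_DIR /ifx/stage # always set on an HDR secondary
```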
22. Best practices for RSS
• ONCONFIG parameter RSS_FLOW_CONTROL
§ Controls RPO (in units of an amount of data rather than time) for the RSS node so it doesn’t fall too far behind
• ONCONFIG parameter SMX_NUMPIPES
§ Take advantage of parallel data transmission using multiple network pipes
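As an ONCONFIG sketch (values illustrative; consult the parameter documentation for the exact RSS_FLOW_CONTROL value format):

```
# RSS tuning (illustrative)
SMX_NUMPIPES 2             # parallel SMX network pipes, useful over a WAN
RSS_FLOW_CONTROL 0         # 0 = off; a data-amount bound caps how far RSS lags
```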
23. Best practices for SDS
• ONCONFIG parameter SDS_LOGCHECK
User scenario: I’m using SDS with no cluster manager. How do I avoid disk corruption and split brain in a failover scenario?
§ SDS_LOGCHECK watches the log space in a failover scenario. After waiting N seconds, if no log activity is seen, the SDS secondary will assume takeover.
§ 10 is a good starting value
• ONCONFIG parameter SDS_FLOW_CONTROL
§ Controls RTO (in units of an amount of data rather than time) for the SDS node so it doesn’t fall too far behind
• No data will be lost because the disks are shared!
• By not falling too far behind, it maintains RTO in the event of a failover so there isn’t too much log to apply in order to catch up
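As an ONCONFIG sketch (values illustrative; consult the parameter documentation for the exact SDS_FLOW_CONTROL value format):

```
# SDS failover and flow control (illustrative)
SDS_LOGCHECK 10            # seconds of log silence before the SDS takes over
SDS_FLOW_CONTROL 0         # 0 = off; a data-amount bound keeps RTO in check
```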
31. Network paths offer perspective
(Diagram: with a single path, PRI and HDR connected through one switch — “Is PRI down?” Yes — vs. an additional network path — “Is PRI down?” No)
32. We Value Your Feedback!
Don’t forget to submit your Insight session and speaker feedback! Your feedback is very important to us – we use it to continually improve the conference.
Access your surveys at insight2015survey.com to quickly submit your surveys from your smartphone, laptop or conference kiosk.
34. Notices and Disclaimers (con’t)
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to interoperate with IBM’s products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights, trademarks or other intellectual property right.
• IBM, the IBM logo, ibm.com, Aspera®, Bluemix, Blueworks Live, CICS, Clearcase, Cognos®, DB2®, DOORS®, Emptoris®, Enterprise Document Management System™, FASP®, FileNet®, Global Business Services®, Global Technology Services®, IBM ExperienceOne™, IBM SmartCloud®, IBM Social Business®, IMS™, Information on Demand, ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON, OpenPower, PureAnalytics™, PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®, pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, Smarter Commerce®, SoDA, SPSS, Sterling Commerce®, StoredIQ, Tealeaf®, Tivoli®, Trusteer®, Unica®, urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml.