CDC is a relatively new approach that "turns the database inside out": it lets you get change events out of the database state. This can be leveraged to build a cache that is never stale.
London In-Memory Computing Meetup - A Change-Data-Capture use-case: designing...Nicolas Fränkel
When one’s app is challenged with poor performance, it’s easy to set up a cache in front of one’s SQL database. It doesn’t fix the root cause (e.g. bad schema design, bad SQL queries, etc.) but it gets the job done. If the app is the only component that writes to the underlying database, it’s a no-brainer to update the cache accordingly, so the cache is always up-to-date with the data in the database.
Things start to go sour when the app is not the only component writing to the DB. Among other sources of writes, there are batches, other apps (shared databases exist, unfortunately), etc. One might think about a couple of ways to keep the data in sync, e.g. polling the DB every now and then, DB triggers, etc. Unfortunately, they all have issues that make them unreliable and/or fragile.
In this talk, I will describe an easy-to-setup architecture that leverages CDC to have an evergreen cache.
jLove - A Change-Data-Capture use-case: designing an evergreen cacheNicolas Fränkel
When one’s app is challenged with poor performance, it’s easy to set up a cache in front of one’s SQL database. It doesn’t fix the root cause (e.g. bad schema design, bad SQL queries, etc.) but it gets the job done. If the app is the only component that writes to the underlying database, it’s a no-brainer to update the cache accordingly, so the cache is always up-to-date with the data in the database.
Things start to go sour when the app is not the only component writing to the DB. Among other sources of writes, there are batches, other apps (shared databases exist, unfortunately), etc. One might think about a couple of ways to keep the data in sync, e.g. polling the DB every now and then, DB triggers, etc. Unfortunately, they all have issues that make them unreliable and/or fragile.
You might have read about Change-Data-Capture before. It’s been described by Martin Kleppmann as turning the database inside out: the DB can send change events (INSERT, UPDATE and DELETE) that one can register to. In contrast to Event Sourcing, which aggregates events to produce state, CDC is about getting events out of state. Once CDC is implemented, one can subscribe to its events and update the cache accordingly. However, CDC is still in its early stages, and implementations are quite vendor-specific.
In this talk, I’ll describe an easy-to-setup architecture that leverages CDC to have an evergreen cache.
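The core idea of the abstracts above can be sketched in a few lines: a stream of row-change events (in the spirit of what a Debezium-style CDC connector emits) is applied to an in-memory cache so it never goes stale. The event shape and field names here are assumptions for illustration, not any connector's actual format:

```python
# Hypothetical CDC-to-cache sketch: each change event carries the
# operation type and the row's new state, so the cache can be kept
# in lockstep with the database without polling or invalidation.

cache = {}

def apply_change_event(event):
    """Apply a single change event to the cache."""
    op = event["op"]            # "c" = create, "u" = update, "d" = delete
    key = event["key"]
    if op in ("c", "u"):
        cache[key] = event["after"]   # the new row state travels with the event
    elif op == "d":
        cache.pop(key, None)          # deletes evict the cached entry

# Simulated event stream, as a connector might deliver it:
events = [
    {"op": "c", "key": 1, "after": {"name": "alice"}},
    {"op": "u", "key": 1, "after": {"name": "alicia"}},
    {"op": "d", "key": 1, "after": None},
]
for e in events:
    apply_change_event(e)
```

Because every write reaches the cache as an event, the cache reflects all writers (apps, batches, other services), not just the one app in front of it.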
PGConf.ASIA 2019 Bali - Tune Your Linux Box, Not Just PostgreSQL - Ibrar AhmedEqunix Business Solutions
This document discusses tuning Linux and PostgreSQL for performance. It recommends:
- Tuning Linux kernel parameters like huge pages, swappiness, and overcommit memory. Huge pages can improve TLB performance.
- Tuning PostgreSQL parameters like shared_buffers, work_mem, and checkpoint_timeout. Shared_buffers stores the most frequently accessed data.
- Other tips include choosing proper hardware, OS, and database based on workload. Tuning queries and applications can also boost performance.
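The parameters named in the summary above could look like the following fragment; the values are illustrative placeholders, not recommendations, and should be tuned against your own workload and available RAM:

```
# postgresql.conf (illustrative values only)
shared_buffers = '8GB'          # cache for the most frequently accessed pages
work_mem = '64MB'               # per-sort/hash-operation memory
checkpoint_timeout = '15min'    # spread checkpoint I/O over time
huge_pages = try                # better TLB hit rates on Linux if available

# Linux sysctl counterparts (e.g. in /etc/sysctl.conf)
# vm.swappiness = 1             # avoid swapping database memory
# vm.overcommit_memory = 2      # don't overcommit; fail allocations instead
```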
Hadoop at Bloomberg: Medium data for the financial industryMatthew Hunt
Overview of big data systems' origins, strengths, weaknesses, and uses at Bloomberg for solving "medium data" timeseries issues. Medium data requires modest clusters with consistently low latency and high availability, and Bloomberg has driven changes to core Hadoop components to address this.
NetApp Cluster-Failover Giveback allows a failed NetApp filer head to recover by having the operational head give back its resources. When a head fails, the other head takes over its disks and network connections in a takeover state. To recover the failed head, the operational head must be instructed to perform a giveback command, which will sync up the peers and allow the failed head to resume normal operations. An HA pair consists of two matching storage controllers connected to each other's disk shelves to provide fault tolerance for nondisruptive maintenance.
Become a MySQL DBA - slides: Deciding on a relevant backup solutionSeveralnines
This document discusses backup solutions for MySQL databases. It begins by defining logical and physical backups. For logical backups, it recommends mysqldump and mydumper/myloader. For physical backups, it recommends xtrabackup and snapshots. It provides details on using these tools and best practices like regular testing of backups. It gives examples of setups for on-premises, Amazon Web Services, and using Cluster Control for managing backups.
Five more ways to potentially break a Ceph cluster are described:
1. Under- or over-estimating the capabilities of automation tools when deploying or updating Ceph can cause issues like removing all monitors.
2. Running Ceph with a minimum size of 1 reduces durability since there is no redundancy if a single object store fails.
3. Not fully completing an update, such as enabling msgr v2 without updating OSDs, can leave the cluster in an inconsistent state.
4. Configuring multiple RADOS gateways with the same ID behind a load balancer causes performance issues as the active gateways change.
5. Blindly trusting the placement group autoscaler without considering the hardware, such
KSCOPE 2013: Exadata Consolidation Success StoryKristofferson A
This document summarizes an Exadata consolidation success story. It describes how three Exadata clusters were consolidated to host 60 databases total. Tools and methodology used included gathering utilization metrics, creating a provisioning plan, implementing the plan, and auditing. The document describes some "war stories" including resolving a slow HR time entry system through SQL profiling, addressing a memory exhaustion issue from an OBIEE report, and using I/O resource management to prioritize critical processes when storage cells became saturated.
The document provides an overview of the Aerospike architecture, including the client, cluster, storage, primary and secondary indexes, RAM, flash storage, and cross datacenter replication (XDR). The Aerospike architecture aims to handle extremely high read/write rates over persistent data at low latency while ensuring consistency and scalability across datacenters with no downtime.
Ceph Day Netherlands - Ceph Management and Monitoring with openATTIC 3.x Ceph Community
openATTIC is an open source Ceph management and monitoring GUI tool that aims to be useful for administrators and scale well. Version 3.x features a major code refactoring with a Ceph-only focus, statelessness, simplified installation, and replaced Nagios with Prometheus and Grafana for monitoring. It adds management of Ceph Object Gateway, iSCSI targets, and NFS shares, supports new Luminous features, and improves pool and RBD management. DeepSea provides automatic Ceph cluster deployment, configuration, and lifecycle management using SaltStack.
C-mode refers to an OS mode in ONTAP 8.0 and beyond where two or more controllers operate as a single shared resource pool. In c-mode, volumes can span multiple storage systems and clients see a single global namespace rather than individual shares. Key features of c-mode include non-disruptive operations where volumes can be moved between nodes and aggregates transparently, with no change to the client view of the filesystem. The major commands for storage failover in c-mode are 'storage failover giveback' to return ownership and 'storage failover show' to check the status of aggregates.
RADOS improvements and roadmap - Greg Farnum, Josh Durgin, Kefu ChaiCeph Community
Cephalocon APAC 2018
March 22-23, 2018 - Beijing, China
Greg Farnum, Red Hat RADOS Core Developer
Josh Durgin, Red Hat RADOS Lead
Kefu Chai, Red Hat Senior Software Engineer
This document provides an overview of CPU performance monitoring and capacity planning. It discusses how to compare CPU speeds using benchmarks like TPC-C and SPECint_rate2006. It also explains the difference between cores and threads, and reviews different CPU events that can be monitored like CPU utilization, wait time, and scheduling. The document demonstrates how to analyze CPU performance when migrating platforms and consolidating environments.
What every data programmer needs to know about disksiammutex
Disk I/O is much slower than memory access. When reading from a file, the OS first checks the page cache in memory, which has nanosecond access times, before needing to physically access the disk with millisecond seek times. When writing to a file, pages are marked as dirty in the cache before being written to disk asynchronously by flush threads. Fsync forces an immediate write to disk, but caches can lie and data may not be safely written. Virtualized environments add additional layers of caching that can affect performance unpredictably. Hardware techniques like SSDs, RAID controllers, and dedicated flash cards can improve I/O speeds.
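The write path described above (page cache first, durable storage only on request) can be made concrete with a minimal sketch; the file path is a throwaway temporary location:

```python
# Data written to a file lands in the OS page cache first; os.fsync()
# asks the kernel to push it to the device. As the summary notes, drive
# caches can still "lie", so fsync narrows but does not fully close the
# durability gap.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.log")

with open(path, "wb") as f:
    f.write(b"hello")      # buffered in user-space first
    f.flush()              # hand off to the kernel page cache
    os.fsync(f.fileno())   # request a durable write to the device

# Reads are served from the page cache when possible (nanoseconds),
# falling back to the disk (milliseconds) only on a cache miss.
with open(path, "rb") as f:
    assert f.read() == b"hello"
```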
The document discusses CPU performance monitoring and capacity planning. It begins with an overview of different CPU events that can be monitored like CPU usage, CPU wait, and CPU scheduler. It then discusses various tools that can be used to monitor CPU usage on the operating system and database level. Finally, it covers methods to compare CPU speeds between different systems, including using published benchmarks and doing actual benchmarking with tools like cputoolkit.
Basic concepts and high level configuration. This is a basic overview of the Aerospike database and presents an introduction to configuring the database service.
Find the full webinar with audio here - http://www.aerospike.com/webinars
PGConf.ASIA 2019 Bali - Setup a High-Availability and Load Balancing PostgreS...Equnix Business Solutions
PGConf.ASIA 2019 Bali - 10 September 2019
Speaker: Bo Peng
Room: SQL
Title: Setup a High-Availability and Load Balancing PostgreSQL Cluster - New Features of Pgpool-II 4.1
FeedBurner is a feed management provider that has scaled to serve 270,000 feeds and 100 million hits per day. The document discusses FeedBurner's scaling history and the solutions they implemented to address various scalability problems, including using load balancers and health checks, caching with Memcached, batch processing, partitioning tables, and implementing a disaster recovery plan using a "Panic App" to download and synchronize feeds.
This document discusses Redis persistence options including RDB snapshots, AOF logging, and how to configure each. RDB takes point-in-time snapshots of the dataset at intervals while AOF logs every write operation. Both can be combined or disabled. RDB is best for disaster recovery due to its single compact file but AOF provides more durability with configurable fsync policies and logs all operations in an easy to parse format. Configuration involves changing settings in the redis.conf file.
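The two persistence modes described above are enabled side by side in redis.conf; the intervals and policy below are examples, not recommendations:

```
# redis.conf fragment (illustrative values only)
save 900 1            # RDB: snapshot if >= 1 change in 900 seconds
save 60 10000         # ...or >= 10000 changes in 60 seconds
appendonly yes        # AOF: log every write operation
appendfsync everysec  # fsync policy: always | everysec | no
```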
Building tungsten-clusters-with-postgre sql-hot-standby-and-streaming-replica...Command Prompt., Inc
Alex Alexander & Linas Virbalas
Hot standby and streaming replication will move the needle forward for high availability and scaling for a wide number of applications. Tungsten already supports clustering using warm standby. In this talk we will describe how to build clusters using the new PostgreSQL features and give our report from the trenches.
This talk will cover how hot standby and streaming replication work from a user perspective, then dive into a description of how to use them, taking Tungsten as an example. We'll cover the following issues:
* Configuration of warm standby and streaming replication
* Provisioning new standby instances
* Strategies for balancing reads across primary and standby database
* Managing failover
* Troubleshooting and gotchas
Please join us for an enlightening discussion of a set of PostgreSQL features that are interesting to a wide range of PostgreSQL users.
The document discusses lessons learned from setting up and maintaining a PostgreSQL cluster for a data analytics platform. It describes four stories where problems arose: 1) Implementing automatic failover using Repmgr when the master node failed, 2) The disk filling up faster than expected due to PostgreSQL's MVCC implementation, 3) Being unable to add a new standby node due to missing WAL segments, and 4) Long running queries on the standby node causing conflicts with replication. The key lessons are around using the right tools like Repmgr for replication management, tuning autovacuum, archiving WALs, and addressing hardware limitations for analytics workloads.
Out of the box replication in postgres 9.4Denish Patel
This document provides an overview of setting up out of the box replication in PostgreSQL 9.4 without third party tools. It discusses write-ahead logs (WAL), replication slots, pg_basebackup, and pg_receivexlog. The document then demonstrates setting up replication on VMs with pg_basebackup to initialize a standby server, configuration of primary and standby servers, and monitoring of replication.
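A rough outline of the setup the summary describes might look as follows; the host, user, and slot names are placeholders:

```
# On the standby: clone the primary's data directory over the wire
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/postgresql/9.4/main -X stream -P

# Primary, postgresql.conf
wal_level = hot_standby
max_wal_senders = 3
max_replication_slots = 3

# Standby, recovery.conf (9.4-era; folded into postgresql.conf in v12+)
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'
primary_slot_name = 'standby1'
```

The replication slot keeps the primary from recycling WAL segments the standby has not yet received, which addresses the "missing WAL segments" failure mode mentioned elsewhere on this page.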
JOnConf - A CDC use-case: designing an Evergreen CacheNicolas Fränkel
This document discusses using change data capture (CDC) and Hazelcast Jet to build an evergreen cache that remains in sync with a database. It covers alternatives to cache invalidation like polling and triggers, introduces CDC and the Debezium implementation, and proposes a Jet job that watches database change events, analyzes them, and updates the cache accordingly to solve the cache freshness problem.
This project report describes the development of an application on the DaVinci platform under the guidance of Prof. TK Dan. Akash Sahoo and Abhijit Tripathy, 7th semester B.Tech students, developed an application to take advantage of the DaVinci's integrated ARM and TMS320C64x+ DSP cores. They ported MontaVista Linux and DSP/BIOS to the DaVinci evaluation module board to enable the application and provide OS support across the hybrid processor system.
Kellyn Pot'Vin-Gorman presented on copy data management and virtualization for DBAs. She discussed how virtualization can be used to provision databases more quickly and easily for tasks like patching and testing without needing to copy large amounts of physical data. She also covered how command line interfaces can be used to automate some of these processes.
The document discusses opportunities for using cloud and virtualization technologies. It describes how data virtualization can help optimize copy data management tasks like provisioning databases and refreshing environments. Virtualization allows creating many virtual copies of databases that take up minimal storage space since they share data blocks. This simplifies tasks like patching databases. The document also discusses best practices for migrating databases to the cloud and using data virtualization to enhance the migration process.
SQL Server 2019 New Features by Yevhen NedaskivskyiAlex Tumanoff
SQL Server 2019 introduces several new high availability and disaster recovery features such as support for up to 5 synchronous replicas in an Always On availability group and improved connection redirection capabilities. It also enhances PolyBase integration and provides new options for certificate management across instances. Additional new features include support for persistent memory, columnstore index improvements, and resumable online index operations.
OSCONF Koshi - Zero downtime deployment with Kubernetes, Flyway and Spring BootNicolas Fränkel
Kubernetes allows a lot. After discovering its features, it’s easy to think it can magically transform your application deployment process into a painless no-event. For Hello World applications, that is the case. Unfortunately, not many of us do deploy such applications day-to-day. You need to think about application backward compatibility, possible rollback, database schema migration, etc. I believe the latter is one of the biggest pain points. In this talk, I’ll demo how to update a Spring Boot app deployed on a Kubernetes cluster with a non-trivial database schema migration with the help of Flyway, while keeping the service up during the entire update process.
This document discusses 101 mistakes that FINN.no learned from in running Apache Kafka. It begins with an introduction to Kafka and why FINN.no chose to use it. It then discusses FINN.no's Kafka architecture and usage over time as their implementation grew. The document outlines several common mistakes made including not distinguishing between internal and external data, lack of external schema definition, using a single configuration for all topics, defaulting to 128 partitions, and running Zookeeper on overloaded nodes. Each mistake is explained, potential consequences are given, better solutions are proposed, and what FINN.no has done to address them.
Testing Delphix: easy data virtualizationFranck Pachot
The document summarizes the author's testing of the Delphix data virtualization software. Some key points:
- Delphix allows users to easily provision virtual copies of database sources on demand for tasks like testing, development, and disaster recovery.
- It works by maintaining incremental snapshots of source databases and virtualizing the data access. Copies can be provisioned in minutes and rewound to past points in time.
- The author demonstrated provisioning a copy of an Oracle database using Delphix and found the process very simple. Delphix integrates deeply with databases.
- Use cases include giving databases to each tester/developer, enabling continuous integration testing, creating QA environments with real
Cloud Bursting 101: What to do When Cloud Computing Demand Exceeds CapacityAvere Systems
Slides from live webinar hosted on February 16, 2017.
Deploying applications locally and bursting them to the cloud for compute may seem difficult, especially when working with high-performance, critical information. However, using cloud bursting to offset peaks in demand can bring big benefits and kudos from organizational leaders always looking to do more with less.
After this short webinar, you’ll be ready to:
- Explain what cloud bursting is and what workloads it is best for
- Identify efficiencies in applying cloud bursting to high-performance applications
- Understand how cloud computing services access your data and consume it during burst cycles
- Share three real-world use cases of companies leveraging cloud bursting for measurable efficiencies
- Have seen a demonstration of how it works
Presenters will build an actionable framework in just thirty minutes and then take questions.
So, you know how to deploy your code, but what about your database? This talk will go through deploying your database with LiquiBase and DBDeploy, a non-framework-based approach to handling migrations of DDL and DML.
The document describes an IT solution blueprint for building efficient disaster recovery (DR) solutions using a cookie cutter approach. It outlines a DR solution built on VMware, NetBackup, and NEC servers/storage using virtualization. The solution is designed to meet tight RPO and RTO requirements of 5 minutes. It demonstrates failover and failback workflows to move operations from a production site to DR site and back. The blueprint approach aims to reuse proven architectural principles and building blocks to deliver more sophisticated, reliable solutions cost effectively.
Using DC/OS for Continuous Delivery - DevPulseCon 2017pleia2
The document discusses new features in DC/OS 1.9 including improved operations through remote container shells, unified logging and metrics, deployment failure debugging, and upgrades/configuration updates. It also covers new workload features such as pods for co-locating containers and GPU-based scheduling. A variety of data services are now available in the DC/OS ecosystem.
At a point in the past, it was forecast that Java would die, but that the JVM platform would be its legacy. And in fact, for a long time, the JVM has been tremendously successful. Wikipedia itself lists a bunch of languages that run on it, some of them close to Java, e.g. Kotlin, some of them very remote, e.g. Clojure.
But nowadays, the Cloud is becoming ubiquitous. Containerization is the way to go to alleviate some of the vendor lock-in issues. Kubernetes is a de facto platform. If a container needs to be killed for whatever reason (resource consumption, unhealthiness, etc.), a new one needs to replace it as fast as possible. In that context, the JVM seems to be a dead-end: its startup time is huge in comparison to a native process. Likewise, it consumes a lot of memory that just increases the monthly bill.
What does that mean for us developers? Has all the time spent in learning the JVM ecosystem been invested with no hope of return on investment? Shall we invest even more time in new languages, frameworks, libraries, etc.? That is one possibility for sure. But we can also leverage our existing knowledge, and embrace the Cloud and container ways with the help of some tools.
In this talk, I’ll create a simple URL shortener with a “standard” stack: Kotlin, JAX-RS and Hazelcast. Then, with the help of Quarkus and GraalVM, I’ll turn this application into a native executable, with all Cloud/Container-related work moved to the build process.
The document describes several scripts used for automating storage operations tasks. The scripts collect data from different storage devices like VNX, XtremIO, Unity, Brocade switches, and NetApp. They help reduce manual effort by providing centralized monitoring, health checks, utilization reports, backups and password changes. The scripts are written in shell, Perl, and HTML, and are stored on an ls2927 server for easy access and management.
Similar to GeekcampSG 2020 - A Change-Data-Capture use-case: designing an evergreen cache (20)
SnowCamp - Adding search to a legacy applicationNicolas Fränkel
Most applications evolve to a point where they need to provide search capabilities. But updating an application is always a risk. Plus, sometimes, you don’t have access to the source code. The easiest way to access the data is to get it directly from the database.
The initial load is the easiest step. However, how do you keep the search index in sync with the database? How do you keep the latency between the search store and the source of truth low, so your users don’t have to wait for the next run of the batch to access the newest changes?
In this live coding session, we will show you how you can solve this issue by connecting Elasticsearch to the database with a touch of Hazelcast.
GitHub is said to be a developer’s résumé. A quick look at your commit history, and recruiters know everything about you. This approach has a few problems. Most companies don’t even publish their code under an Open Source license. If you work for one of them, and if you’re not an Open Source developer on evenings and weekends, then you don’t stand a chance.
Recently, GitHub enabled a degree of customization of one’s profile. So even if your commit history has more white than green, you can provide a good entry point for potential employers. But it’s only worth the effort you put into it, and the data loses its value quickly. Yet, with a little work and the help of automation tools (such as GitHub Actions), you can present an always up-to-date profile.
Zero-downtime deployment on Kubernetes with HazelcastNicolas Fränkel
Kubernetes allows a lot. After discovering its features, it’s easy to think it can magically transform your application deployment process into a painless no-event. For Hello World applications, that is the case. Unfortunately, not many of us do deploy such applications day-to-day because we need to handle state. Though it would be much easier to have stateless apps, and despite our best efforts in this direction, state is found in (at least) two places: sessions and databases.
You need to think about keeping state while stopping and starting application nodes. In this talk, I’ll demo how to update a Spring Boot app deployed on a Kubernetes cluster with a non-trivial database schema change, with the help of Hazelcast, while keeping the service up during the entire update process.
BigData conference - Introduction to stream processingNicolas Fränkel
This document discusses stream processing and summarizes a presentation about the topic. It introduces Hazelcast Jet as a stream processing engine and covers open data standards like GTFS. It also describes a demo that uses GTFS data to enrich public transit vehicle position updates in real-time using Hazelcast Jet. The presentation discusses streaming approaches, benefits over batch processing, and provides an overview of stream processing concepts.
ADDO - Your own Kubernetes controller, not only in GoNicolas Fränkel
In Kubernetes, operators allow the API to be extended to your heart’s content. If one task requires too much YAML, it’s easy to create an operator to take care of the repetitive cruft and only require a minimum amount of YAML.
On the other hand, since its beginnings, the Go language has been advertised as closer to the hardware, and is now ubiquitous in low-level programming. Kubernetes has been rewritten from Java to Go, and its whole ecosystem revolves around Go. For that reason, it’s only natural that Kubernetes provides a Go-based framework to create your own operator. While it makes sense, it requires organizations willing to go down this road to have Go developers, and/or train their teams in Go. While perfectly acceptable, this is not the only option. In fact, since Kubernetes is based on REST, why settle for Go and not use your own favorite language?
In this talk, I’ll describe what an operator is, how operators work, how to design one, and finally demo a Java-based operator that is as good as a Go one.
TestCon Europe - Mutation Testing to the Rescue of Your TestsNicolas Fränkel
Unit testing ensures your production code is relevant. But what ensures your testing code is relevant? Come discover mutation testing and make sure you never forget another assert again.
In the realm of testing, the code coverage metric is the one most often talked about. However, it doesn’t mean that the test has been useful or even that an assert has been coded. Mutation testing is a strategy to make sure that the test code is relevant.
In this talk, Nicolas will explain how Code Coverage is computed and what its inherent flaw is. Afterwards, he will describe how Mutation Testing works and how it helps point out code that is tested but leaves out corner cases. He will also demo PIT, a production-grade Java framework that enables Mutation Testing.
OSCONF Jaipur - A Hitchhiker's Tour to Containerizing a Java applicationNicolas Fränkel
As “the Cloud” becomes more and more widespread, now is a good time to assess how you can containerize your Java application. I assume you’re able to write a Dockerfile around the generated JAR. However, each time the application’s code changes, the whole image needs to be rebuilt. If you’re deploying to a local Kubernetes cluster environment, this considerably lengthens the feedback loop.
In this demo-based talk, I’ll present different ways to get your Java app in a container: Dockerfile, Jib, and Cloud Native Buildpacks. We will also have a look at what kind of Docker image they generate, how they layer the images, whether those images are compatible with skaffold, etc.
JavaDay Istanbul - 3 improvements in your microservices architectureNicolas Fränkel
While a microservices architecture is more scalable than a monolith, it has a direct hit on performance.
To cope with that, one performance improvement is to set up a cache. It can be configured for database access, for REST calls or just to store session state across a cluster of server nodes. In this demo-based talk, I’ll show how Hazelcast In-Memory Data Grid can help you in each one of those areas and how to configure it. Hint: it’s much easier than one would expect.
Devclub.lv - Introduction to stream processingNicolas Fränkel
While “software is eating the world”, those who are able to best manage the huge mass of data will emerge out on the top.
The batch processing model has been faithfully serving us for decades. However, it might have reached the end of its usefulness for all but some very specific use-cases. As the pace of business increases, decision-makers most of the time prefer slightly wrong data sooner to 100% accurate data later. Stream processing – or data streaming – exactly matches this usage: instead of managing the entire bulk of data, manage pieces of it as soon as they become available.
This talk will cover the reasons behind the new stream processing model, how it compares to the old batch model, their pros and cons, and a list of existing technologies implementing stream processing with their most prominent characteristics. It will detail one possible use-case of data streaming that is not possible with batches: displaying in (near) real-time all trains in Switzerland and their position on a map, beginning with an overview of all the requirements and the design. Finally, using an OpenData endpoint and the Hazelcast platform, it will show a working demo implementation.
Java.IL - Your own Kubernetes controller, not only in Go!Nicolas Fränkel
The document discusses stream processing and provides an overview of Hazelcast Jet. It begins with explaining why streaming is useful and describes different streaming approaches like event-driven programming. It then provides details on Hazelcast Jet, including its concepts of pipelines and jobs. The document also discusses open data standards like GTFS and demonstrates a sample streaming pipeline that enriches public transportation data from open APIs.
London Java Community - An Experiment in Continuous Deployment of JVM applica...Nicolas Fränkel
A couple of years ago, continuous integration in the JVM ecosystem meant Jenkins. Since that time, a lot of other tools have been made available. But new tools don’t mean new features, just new ways. Besides that, what about continuous deployment? There’s no tool that allows deploying new versions of a JVM-based application without downtime. The only way to achieve zero downtime is to have multiple nodes deployed on a platform, and let that platform handle it, e.g. Kubernetes.
And yet, achieving true continuous deployment of bytecode on one single JVM instance is possible if one changes one’s way of looking at things. What if the compilation could be seen as changes? What if those changes could be stored in a data store, and a listener on this data store could stream those changes to the running production JVM via the Attach API?
In this talk, we'll demo exactly that using Hazelcast and Hazelcast Jet - but it’s possible to re-use the principles that will be shown using other streaming technologies.
OSCONF - Your own Kubernetes controller: not only in GoNicolas Fränkel
This document discusses creating Kubernetes operators and controllers using different programming languages besides Go. It suggests that a Java-based controller is possible using the GraalVM, which allows creating native executables from Java bytecode. Key points covered include what controllers and operators are, that no specific technology stack is required, and that the JVM could be a good option for controller development with GraalVM's support for polyglot programming and creating native applications.
vKUG - Migrating Spring Boot apps from annotation-based config to FunctionalNicolas Fränkel
Migrating Spring Boot apps from annotation-based config to Functional with Kotlin - Nicolas Fränkel
In recent years, there has been some push-back against frameworks, and more specifically annotations: some call them magic. Obviously, they make understanding the flow of the application harder.
The latest versions of Spring and Spring Boot go along with this trend by offering an additional way to configure beans with explicit code instead of annotations. It’s declarative in the sense that it looks like configuration, though it’s based on Domain-Specific Language(s). This talk aims to demo a step-by-step process to achieve that.
AllTheTalks.online - A Streaming Use-Case: And Experiment in Continuous Deplo...Nicolas Fränkel
Nicolas Fränkel proposes an experimental use case of continuous deployment of bytecode by dynamically loading Java agents into a running JVM using the Attach API. The agent would use the Instrumentation API to modify bytecode of running applications by redefining classes in order to deploy changes made by a continuous integration pipeline without restarting the JVM. Current limitations include a lack of tagging and an inability to add/remove fields or methods via redefinition. Improvements could include streaming changes directly from the CI pipeline and adding metadata to classes from the Git tag.
ING Meetup - Migrating Spring Boot Config Annotations to Functional with KotlinNicolas Fränkel
In recent years, there has been some push-back against frameworks, and more specifically annotations: some call them magic. Obviously, they make understanding the flow of the application harder. The latest versions of Spring and Spring Boot go along with this trend by offering an additional way to configure beans with explicit code instead of annotations. It's declarative in the sense that it looks like configuration, though it's based on Domain-Specific Language(s). This talk aims to demo a step-by-step process to achieve that.
SouJava- 3 easy performance improvements in your microservices architectureNicolas Fränkel
While a microservices architecture is more scalable than a monolith, it has a direct hit on performance.
To cope with that, one performance improvement is to set up a cache. It can be configured for database access, for REST calls or just to store session state across a cluster of server nodes. In this demo-based talk, I'll show how Hazelcast In-Memory Data Grid can help you in each one of those areas and how to configure it. Hint: it's much easier than one would expect.
While “software is eating the world”, those who are able to best manage the huge mass of data will emerge out on the top.
The batch processing model has been faithfully serving us for decades. However, it might have reached the end of its usefulness for all but some very specific use-cases. As the pace of business increases, decision-makers most of the time prefer slightly wrong data sooner to 100% accurate data later. Stream processing - or data streaming - exactly matches this usage: instead of managing the entire bulk of data, manage pieces of it as soon as they become available.
In this talk, I’ll define the context in which the old batch processing model was born, the reasons behind the new stream processing one, how they compare, their pros and cons, and a list of existing technologies implementing the latter with their most prominent characteristics. I’ll conclude by describing in detail one possible use-case of data streaming that is not possible with batches: displaying in (near) real-time all trains in Switzerland and their position on a map. I’ll go through all the requirements and the design. Finally, using an OpenData endpoint and the Hazelcast platform, I’ll try to impress attendees with a working demo implementation.
2. @nicolas_frankel
Me, myself and I
Former developer, team lead, architect,
blah-blah
Developer Advocate
Interested in CDC and data streaming
3. @nicolas_frankel
Hazelcast
HAZELCAST IMDG is an operational, in-memory, distributed computing platform that manages data using in-memory storage and performs parallel execution for breakthrough application speed and scale.
HAZELCAST JET is the ultra-fast, application-embeddable, 3rd-generation stream processing engine for low-latency batch and stream processing.
4.
5. @nicolas_frankel
Agenda
1. Why cache?
2. Alternatives to keeping the cache in sync
3. Change-Data-Capture (CDC)
4. Debezium, a CDC implementation
5. Hazelcast Jet + Debezium
6. Demo!
8. @nicolas_frankel
Aye, there’s the rub!
A new component
writes to the
database
E.g.: a table holding
references needs to
be updated every
now and then
11. @nicolas_frankel
Cache eviction vs Time-To-Live
Cache eviction: which entities to evict
when the cache is full
• Least Recently Used
• Least Frequently Used
TTL: how long will an entity be kept in
the cache
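As a minimal sketch of the first eviction policy (this is illustrative plain Java, not Hazelcast code), `java.util.LinkedHashMap` can implement an LRU cache in a few lines:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: when full, the least recently *accessed* entry is evicted.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: iteration order = access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the eldest once over capacity
    }
}

public class LruDemo {
    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a", so "b" becomes the least recently used
        cache.put("c", "3"); // capacity exceeded: "b" is evicted
        System.out.println(cache.containsKey("b")); // false
        System.out.println(cache.containsKey("a")); // true
    }
}
```

The capacity of 2 is arbitrary; real caches combine such a policy with the TTL discussed next.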
12. @nicolas_frankel
Choosing the “correct” TTL
TTL longer than the update frequency
• Miss updates
TTL shorter than the update frequency
• Waste resources
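A toy model of this trade-off (all names invented; a manual clock replaces real time): with a 100 ms TTL, a row updated at t=50 is served stale until expiry, while an unchanged row still forces a reload at t=150.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a TTL cache with a manually advanced clock.
class TtlCache {
    private final Map<String, String> values = new HashMap<>();
    private final Map<String, Long> expiries = new HashMap<>();
    private final long ttlMillis;
    long now = 0; // manual clock, advanced by the caller

    TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void put(String key, String value) {
        values.put(key, value);
        expiries.put(key, now + ttlMillis);
    }

    // Returns null when the entry has expired: the caller must reload from the DB.
    String get(String key) {
        Long expiry = expiries.get(key);
        if (expiry == null || expiry <= now) return null;
        return values.get(key);
    }
}

public class TtlDemo {
    public static void main(String[] args) {
        TtlCache cache = new TtlCache(100);
        cache.put("price", "10"); // loaded from the DB at t=0
        cache.now = 50;           // the DB row changes at t=50...
        System.out.println(cache.get("price")); // ...but the cache still serves "10": stale read
        cache.now = 150;
        System.out.println(cache.get("price")); // null: expired, reload needed even if unchanged
    }
}
```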
14. @nicolas_frankel
Event-driven for the win!
1. If no writes happen, there's no need to
update the cache
2. If a write happens, then the relevant
cache item should be updated
accordingly
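The event-driven idea can be sketched without any CDC machinery: a write path that notifies listeners, one of which keeps the cache in sync. All names here are made up for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Toy "database" that emits an event for every write.
class EventfulStore {
    private final Map<String, String> rows = new HashMap<>();
    private final List<BiConsumer<String, String>> listeners = new ArrayList<>();

    void onWrite(BiConsumer<String, String> listener) { listeners.add(listener); }

    void write(String key, String value) {
        rows.put(key, value);
        listeners.forEach(l -> l.accept(key, value)); // push the change event
    }
}

public class EventDrivenCacheDemo {
    public static void main(String[] args) {
        EventfulStore db = new EventfulStore();
        Map<String, String> cache = new HashMap<>();
        db.onWrite(cache::put); // the cache follows every write: no polling, no TTL

        db.write("user:1", "Alice");
        db.write("user:1", "Alicia");
        System.out.println(cache.get("user:1")); // Alicia
    }
}
```

The hard part, covered next, is producing such events when the writer is another component entirely.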
16. @nicolas_frankel
The example of MySQL: User-defined function
Functions must be written in C++
The OS must support dynamic loading
Becomes part of the running server
• Bound by all constraints that apply to
writing server code
Etc.
-- https://dev.mysql.com/doc/refman/8.0/en/adding-udf.html
17. @nicolas_frankel
lib_mysqludf_sys
UDF library with functions to interact with the operating system
CREATE TRIGGER MyTrigger
AFTER INSERT ON MyTable
FOR EACH ROW
BEGIN
DECLARE cmd CHAR(255);
DECLARE result INT(10);
SET cmd = CONCAT('update_row', '1');
SET result = sys_exec(cmd);
END;
-- https://github.com/mysqludf/lib_mysqludf_sys
19. @nicolas_frankel
Change-Data-Capture
“In databases, Change Data Capture is a set
of software design patterns used to determine
and track the data that has changed so that
action can be taken using the changed data.
CDC is an approach to data integration that is
based on the identification, capture and
delivery of the changes made to enterprise
data sources.”
-- https://en.wikipedia.org/wiki/Change_data_capture
20. @nicolas_frankel
CDC implementation options
1. Polling + Timestamps on rows
2. Polling + Version numbers on rows
3. Polling + Status indicators on rows
4. Triggers on tables
5. Log scanners
-- https://en.wikipedia.org/wiki/Change_data_capture
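Option 1 can be sketched as follows (a hand-rolled illustration, not how a CDC product implements it): each row carries an `updatedAt` timestamp and the poller only picks up rows newer than the last poll.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Each "row" carries an updatedAt timestamp; the poller only picks up newer rows.
class Row {
    final String id; final String value; final long updatedAt;
    Row(String id, String value, long updatedAt) {
        this.id = id; this.value = value; this.updatedAt = updatedAt;
    }
}

class TimestampPoller {
    private long lastSeen = Long.MIN_VALUE;

    // Equivalent to: SELECT * FROM t WHERE updated_at > :lastSeen
    List<Row> poll(Map<String, Row> table) {
        List<Row> changes = new ArrayList<>();
        long max = lastSeen;
        for (Row row : table.values()) {
            if (row.updatedAt > lastSeen) {
                changes.add(row);
                max = Math.max(max, row.updatedAt);
            }
        }
        lastSeen = max;
        return changes;
    }
}

public class PollingDemo {
    public static void main(String[] args) {
        Map<String, Row> table = new LinkedHashMap<>();
        TimestampPoller poller = new TimestampPoller();
        table.put("1", new Row("1", "a", 10));
        System.out.println(poller.poll(table).size()); // 1
        System.out.println(poller.poll(table).size()); // 0: nothing new
        // Note: a row inserted *and deleted* between two polls is missed entirely -
        // one of the weaknesses that log scanners (option 5) avoid.
        table.put("1", new Row("1", "b", 20));
        System.out.println(poller.poll(table).size()); // 1
    }
}
```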
21. @nicolas_frankel
“Turning the database inside out” - Martin Kleppmann
-- https://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/
22. @nicolas_frankel
What is a transaction/binary/etc. log?
“The binary log contains ‘events’ that
describe database changes such as table
creation operations or changes to table
data.”
-- https://dev.mysql.com/doc/refman/8.0/en/binary-log.html
27. @nicolas_frankel
Debezium to the rescue
Java-based abstraction layer for CDC
Provided by Red Hat
Apache v2 licensed
Very skewed toward Kafka
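For illustration, a Debezium MySQL source is configured through connector properties along these lines. The values are placeholders, and exact property names depend on the Debezium version (newer releases renamed e.g. `table.whitelist` to `table.include.list`):

```properties
name=evergreen-cache-connector
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=localhost
database.port=3306
database.user=debezium
database.password=changeme
# Unique ID the connector uses when joining MySQL replication as a client
database.server.id=85744
# Logical name, used as the prefix of emitted change-event topics
database.server.name=mycache
# Only capture the table(s) backing the cache
table.whitelist=mydb.MyTable
```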
32. @nicolas_frankel
Deployment modes
// Create new cluster member
JetInstance jet = Jet.newJetInstance();
// Connect to running cluster
JetInstance jet = Jet.newJetClient();
Embedded: each application instance starts its own cluster member through the Java API.
Client/Server: applications connect to a standalone cluster through the Client API.
33. @nicolas_frankel
Pipeline
• Declarative code that defines and links sources, transforms, and sinks
• Written with a platform-specific SDK
• The client submits the pipeline to the SPE
Job
• Running instance of a pipeline in the SPE
• The SPE executes the pipeline:
• Code execution
• Data routing
• Flow control
34. @nicolas_frankel
Back to our use-case
A Jet job:
1. Watches change events in the
database
2. Analyzes the change event
3. Updates the cache accordingly
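These three steps can be simulated end-to-end without any infrastructure: a stream of change events (standing in for Debezium records, whose `op` codes "c"/"u"/"d" follow Debezium's convention) is analyzed and applied to a plain map standing in for the cache. All types here are invented for the sketch.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for a Debezium change event: operation + key + new row state.
class ChangeEvent {
    final String op;    // "c" = create, "u" = update, "d" = delete
    final String key;
    final String after; // row state after the change, null for deletes
    ChangeEvent(String op, String key, String after) {
        this.op = op; this.key = key; this.after = after;
    }
}

public class EvergreenCacheDemo {
    // Steps 2 and 3 of the job: analyze each event and update the cache accordingly.
    static void apply(Map<String, String> cache, ChangeEvent e) {
        if ("d".equals(e.op)) cache.remove(e.key);
        else cache.put(e.key, e.after); // create and update are handled alike
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        List<ChangeEvent> log = List.of(   // step 1: events read off the binlog
            new ChangeEvent("c", "1", "Alice"),
            new ChangeEvent("u", "1", "Alicia"),
            new ChangeEvent("c", "2", "Bob"),
            new ChangeEvent("d", "2", null));
        log.forEach(e -> apply(cache, e));
        System.out.println(cache); // only key 1, with its latest value, survives
    }
}
```

In the real demo, the event source is Debezium streaming the MySQL binlog into a Jet pipeline, and the map is a Hazelcast IMap, but the update logic is essentially this.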