For years, Rabobank has been actively investing in becoming a real-time, event-driven bank. If you are familiar with banking processes, you will understand that this is not simple. Many banking processes are implemented as batch jobs on not-so-commodity hardware, meaning that any migration effort is immense.
* Find out how Rabobank redesigned Rabo Alerts while continuing to provide a robust and stable alert system for its existing user base
* Learn how the project team balanced the need to decentralise activity with the need to stay in control
* Understand how Rabobank reinvented a reliable service to meet modern customer expectations
4. In 2015 we started redesigning Rabo Alerts
[Diagram: the alerting solution. A Subscription Service holds user alert preferences, contracts, compliance & business rules and user channel preferences. A Complex Event Processing component identifies relevant users, filters and translates events, and distributes user/channel-specific messages to Rabo Alerts and other subscribers, all on top of the Business Event Bus. Open question at the time: which components should get a public interface?]
5. A marketplace where applications exchange Business Events at the moment they occur.
A Business Event is something that happens to which organizational entities might want to react.
Objective: BEB provides all the tools, mechanisms and processes to support organization-wide event sourcing.
Collaboration between Rabobank and Axual for design, build & run.
What is Business Event Bus?
[Diagram: the Business Event Bus at the center, connecting Marketing, Sales, Payments, Accounting, Savings, Loans, Mortgages, Customer Service, CRM, Cards, Investments and IT.]
6. High Confidentiality, Medium Integrity, High Availability
“Sharing events” caught on quickly
[Diagram: Payments, Stock Orders and Website publish events onto the Business Event Bus; Relevance, Mortgages, Personal Finance Management and Rabo Alerts consume them.]
7. What makes an ideal streaming platform?
Characteristics
• Always available
• Low maintenance
• Self-Service
• Easy to use
• Standardized
…while respecting enterprise requirements
• Integration with Corporate Directory
• Data Security
• Data Governance
9. High Availability
[Diagram: a client application uses the Axual Client Library to reach two DCs or clouds, each running Apache Kafka (Confluent Platform) behind an API, with multi-directional message/offset replication between them; a Config Provider tells clients which cluster to use.]
Runs in multiple data centers
• Each DC runs an autonomous Kafka cluster
• Messages are replicated within a cluster
• Clusters withstand node failures
One logical platform
• Axual extended Kafka across data centers
• Messages are replicated symmetrically using a push-out mechanism
• Upon DC failure, apps are routed to another DC
Apps’ perspective
• Applications do not know about Kafka clusters
• Apps query the Configuration Provider to find out where Kafka is and where they should produce/consume
• They repeat the same query every 10 minutes
• In case of disaster or DC failure, apps are directed to switch to another cluster to continue
• Also supports scheduled maintenance windows
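The discovery mechanism above can be sketched as a small loop: ask the Configuration Provider where Kafka is, use that answer, and ask again on an interval (10 minutes in production). The class and host names below are invented stand-ins for the real HTTP-based provider.

```python
import time

# Toy model of the client-side discovery loop described above. The real
# Configuration Provider is an HTTP service; this stub just returns an
# address that operators can repoint on DC failure or maintenance.
class ConfigProviderStub:
    def __init__(self):
        self.active = "kafka-dc1.example.internal:9093"  # placeholder address

    def resolve(self):
        return self.active

def discovery_loop(provider, ticks, interval_s=0):
    """Yield the current bootstrap address once per tick (every 10 minutes in production)."""
    for _ in range(ticks):
        yield provider.resolve()
        time.sleep(interval_s)

provider = ConfigProviderStub()
seen = []
for addr in discovery_loop(provider, 2):
    seen.append(addr)
    provider.active = "kafka-dc2.example.internal:9093"  # simulate a DC1 failover

print(seen)  # first the DC1 address, then the DC2 address
```

In the real platform the Axual Client Library hides this loop entirely, so applications never deal with cluster addresses themselves.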
10. Security Enforcement
[Diagram: Producers 1-3 write to data streams on the Business Event Bus and Consumers 1-n read from them; each application can only reach the streams it is authorized for.]
Security Mechanisms
• All connections are secured by two-way SSL
• TLS v1.1 and v1.2 are supported; TLS v1.0 is deprecated and turned off by default
• Certificates are used to authenticate and perform stream authorization
• It is impossible for an application to gain access to a stream that it has no rights for
• Mechanisms implemented transparently in the Axual Client Library
Application Catalog
• Central repository registering all known apps
• Every app must declare its SSL certificates
• Certificates must be signed by a trusted Certificate Authority
Stream Access Rights Management
• Streams are secured through Access Control Lists
• Applications are assigned rights to produce, consume or both
• Streams are physically separated from each other, stored in separate files
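As a rough illustration of the two-way SSL setup described above, a client configuration might look like the following. The key names follow common librdkafka/confluent-kafka conventions; the broker address and file paths are placeholders, and in practice the Axual Client Library applies these settings transparently.

```python
# Illustrative mutual-TLS ("two-way SSL") client settings. All values are
# placeholders; the real platform issues certificates registered in the
# Application Catalog and signed by a trusted Certificate Authority.
def mtls_config(cert_path: str, key_path: str, ca_path: str) -> dict:
    return {
        "bootstrap.servers": "kafka.example.internal:9093",  # placeholder broker
        "security.protocol": "SSL",             # encrypt and authenticate both sides
        "ssl.ca.location": ca_path,             # trusted Certificate Authority bundle
        "ssl.certificate.location": cert_path,  # app certificate, also used for stream authorization
        "ssl.key.location": key_path,           # app private key
        "ssl.endpoint.identification.algorithm": "https",  # verify broker hostname
    }

config = mtls_config("app.crt", "app.key", "ca.pem")
```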
11. Data Governance
Objectives
• Maintain control over your streaming landscape
• Promote reuse of existing data streams
• Allow stream and schema versions to co-exist
Stream Governance
• Central repository with stream definitions
• Administers stream properties like ownership, retention time and message formats
• Deployable to different environments
Schema Governance
• Central repository where schemas are registered and maintained
• Contains all versions of a schema and allows for comprehensible schema evolution
• Confluent Schema Registry enforces schemas as data contracts at runtime
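The backward-compatibility rule that lets schema versions co-exist can be shown with a toy check: a new version may add fields only if they carry defaults, so messages written under the old version still deserialize. The field names and schema contents below are invented; the real enforcement is done by the Confluent Schema Registry at registration and runtime.

```python
# Toy model of a backward-compatibility check for schema evolution.
# A reader on the new schema must still understand old messages, so
# every field added in the new version needs a default value.
def is_backward_compatible(old_fields: dict, new_fields: dict) -> bool:
    added = set(new_fields) - set(old_fields)
    return all(new_fields[f].get("default") is not None for f in added)

v1 = {"iban": {"type": "string"}, "amount": {"type": "long"}}
v2 = {**v1, "currency": {"type": "string", "default": "EUR"}}  # OK: default given
v3 = {**v1, "category": {"type": "string"}}                    # breaks old readers

print(is_backward_compatible(v1, v2))  # → True
print(is_backward_compatible(v1, v3))  # → False
```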
12. Self-Service
Power to the Teams
• There is a general shift in enterprises towards DevOps and autonomous teams
• Teams must be empowered to take control over their applications in every way
• Through Self-Service we want to allow everyone to create, modify and deploy streams, easily and securely
Axual UI
• Simple and uniform interface for functional maintenance
• Allows configuration of Environments, Applications and Streams
• Strict ownership rules allow regulation of responsibilities
• Easy deployment of configured data streams
Axual API
• The UI uses a REST API to modify catalog information
• The API can also be called from CI/CD workflows
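As a hypothetical sketch of calling such a REST API from a CI/CD job, the snippet below builds a stream-creation request. The endpoint, field names and values are invented for illustration and will differ in the actual Axual API.

```python
import json

# Hypothetical stream-creation request for a self-service REST API,
# as a CI/CD pipeline might issue it. URL and payload schema are made up.
def create_stream_request(name: str, owner_team: str, retention_days: int):
    url = "https://selfservice.example.internal/api/streams"  # placeholder URL
    payload = {
        "name": name,
        "owner": owner_team,  # strict ownership: every stream has exactly one owning team
        "retentionMs": retention_days * 24 * 60 * 60 * 1000,
        "keyType": "AVRO",
        "valueType": "AVRO",
    }
    return url, json.dumps(payload).encode()

url, body = create_stream_request("payments-booked", "team-alerts", 7)
```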
13. Platform capabilities
• Publish/Subscribe Messaging
• High Performance
• High Availability
• Multi-tenant & Multi-environment
• Data Governance
• Self-Service
• Data Lineage
• Security
• Integration options & Protocol support
Development capabilities
• Client libraries
• Test tooling
• CI / CD integration
Operations
• On-premise and in Clouds
• Configurable and Automated Deployments
• Monitoring, Alerting & Logging
Logical Architecture
[Diagram: two DCs or clouds, each running Apache Kafka (Confluent Platform) behind an API, with multi-directional message/offset replication between them. Cross-cutting layers: Data Governance, Data Lineage, Security, Development Tools, Self Service, High Availability and Multi-tenancy, together forming the Business Event Bus.]
14. Event-driven banking examples
[Diagram: events such as a customer birthday, a booking on a payment account and a customer logging in flow over the Business Event Bus. Consumers react: transform a youth account to a student account when the customer turns 18, fraud detection, and a relevance engine that triggers personal alert generation via SMS, email, push or another action.]
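The pattern behind these examples is plain publish/subscribe dispatch: handlers register for an event type and react when an event arrives. The in-process sketch below uses invented event and handler names; on the real platform, `publish` would be a produce to a Kafka topic on the Business Event Bus.

```python
from collections import defaultdict

# Handlers keyed by event type; subscribing registers a callback.
handlers = defaultdict(list)

def subscribe(event_type):
    def register(fn):
        handlers[event_type].append(fn)
        return fn
    return register

def publish(event_type, event):
    # In production this would be a Kafka topic on the Business Event Bus;
    # here dispatch is a simple in-process call to every subscriber.
    return [fn(event) for fn in handlers[event_type]]

@subscribe("customer.birthday")
def maybe_upgrade_account(event):
    # React to the birthday event: youth account becomes a student account at 18.
    if event["age"] == 18:
        return f"upgrade {event['customer_id']} youth->student"
    return None

print(publish("customer.birthday", {"customer_id": "C42", "age": 18}))
```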
15. Today, events are key assets
Many types of events are now shared, e.g.:
Transactions
Customer Authorizations
Web analytics
Customer birthdays
Mortgage status
Address book updates
18. • Walking the path taught us more than talking about it
• DevOps is mandatory for the central team to be effective
• Team consists of only 4 people
• Find out in practice what works and what doesn’t
• Improve developer journeys
• Reduce the amount of work per data stream
• Automate all the things
• Team focus and passion are crucial
Lessons learned
19. • Use Cases:
• Scale up use with microservices
• API and events collaboration
• More real-time business
• Business Event Bus:
• Scale platform to Cloud
• Multi-Cloud
• Productized as the Axual Platform for other organizations
Future steps
[Diagram: Apache Kafka (Confluent Platform) clusters, each behind an API, running in On-premise DC1 and DC2, Azure Region 1 and 2, AWS and Google Cloud, all interconnected.]
Editor's Notes
VINCENT
Introduction Webcast
For the past years, Rabobank has been actively investing in becoming a real-time, event-driven bank. If you are familiar with banking processes, you will understand that this is quite a step. A lot of banking processes are implemented as batch jobs on not-so-commodity hardware, so the migration effort is immense.
But as said, Rabobank picked up this challenge and defined the Business Event Bus (BEB) as the place where business events from across the organization are shared between applications. They chose Apache Kafka as the main engine underneath and wrote their own BEB client library to facilitate application developers with features like easy message producing/consuming and disaster recovery.
In this webcast we will zoom in on the journey that Rabobank undertook, the challenges faced and how they were overcome. Finally we will wrap up with a summary of where Rabobank is today and an outlook of things to come.
VINCENT
VINCENT
How we got started with Kafka
The design of our Kafka setup started with the redesign of Rabo Alerts, a service that allows Rabobank customers to be alerted whenever interesting financial events occur. A simple example of an event is when a certain amount was debited from or credited to your account, but more complex events also exist. It is worth noting that Rabo Alerts is not a new or pilot service. It has been in production for over ten years and is available to millions of account holders. But the former implementation of Rabo Alerts resided on mainframe systems. All processing steps were batch-oriented: the mainframe would derive the alerts to be sent every couple of minutes up to only a few times per day, depending on the alert type. The implementation was very stable and reliable, but there were two issues that Rabobank wanted to solve: (1) lack of flexibility and (2) lack of speed/timeliness.
Flexibility for adapting to new business requirements was low because changing the supported alerts or adding new (and smarter) alerts required a lot of effort. Rabobank’s pace to introduce new features in its online environment has increased heavily in the past years, thus an inflexible alerting solution was becoming increasingly problematic.
Speed/timeliness of alert delivery was also an issue, because it could take the old implementation 5 minutes up to 4-5 hours to deliver alerts to customers (depending on alert type and batch execution windows). Ten years ago one could argue this was fast enough, but today customer expectations are much higher! The time window in which Rabobank can present “relevant information” to the customer is much smaller today than it used to be ten years ago.
So the question was raised on how the existing mechanism could be redesigned to become more extensible and faster. And of course the redesigned Rabo Alerts, too, would need to be robust and stable so that it could properly serve its existing user base of millions of customers.
In order to support Rabo Alerts, we started to investigate the possible options for realization and soon found that there was no suitable mechanism to transport event-type data in real-time from the backend payment systems to the customer-facing applications, which would send out the alerts. That’s when the design of a new platform - dubbed Business Event Bus - was kicked off.
VINCENT
VINCENT
Other use cases that popped up
Stock alerts
Personal Finance Management
Relevance
Click events
VINCENT
VINCENT
HA: withstand data center outages or the failure of an entire cluster
Sec: strict access control for applications
DG: how to stay in control of streams and schemas
SS: how to push functional maintenance out to other teams
JEROEN
Kafka provides highly available setups using a cluster architecture. Business Event Bus takes this to the next level by setting up a multi-cluster system with data replication between these clusters. An entire cluster can fail and your data will still be secured and available. Moreover, in the case of an entire cluster failure, connecting applications can switch (unknowingly and fully automatically) due to symmetrical replication in combination with the configuration provider for connecting clients.
JEROEN
JEROEN
JEROEN
JEROEN
VINCENT
VINCENT
VINCENT
Some interesting cases
Transaction Cache (fast searching)
Credit card activation
Customer logs / service desk
Fraud detection
Login event that we use as a trigger in several places