04 Jun 2019

Real-Time Financial Alerts at Rabobank With Apache Kafka’s Streams API

Rabobank used Kafka's Streams API to do real-time alerting on financial events for its customers. Learn more about the use case in this blog.

RABOBANK NETHERLANDS

Rabobank is based in the Netherlands, with over 900 locations worldwide, 48,000 employees, and €681B in assets. It is a bank by and for customers, a cooperative bank, a socially responsible bank. It aims to be the market leader across all financial markets in the Netherlands. Rabobank is also committed to being a leading bank in the field of food and agriculture worldwide. Rabobank provides financial products and services to millions of customers around the world.

For the past few years, Rabobank has been actively investing in becoming a real-time, event-driven bank. If you are familiar with banking processes, you will understand that this is quite a step. Many banking processes are implemented as batch jobs on not-so-commodity hardware, so the migration effort is immense. But Rabobank picked up this challenge and defined the Business Event Bus (BEB) as the place where business events from across the organization are shared between applications. They chose Apache Kafka as the main engine underneath and wrote their own BEB client library to provide application developers with features like easy message producing/consuming and disaster recovery.

Rabobank uses an Active-Active Kafka setup, where Kafka clusters in multiple data centers are mirrored symmetrically. Upon data center failure or operator intervention, BEB clients—including the Kafka Streams based applications discussed in this article—may be switched over from one Kafka cluster to another without requiring a restart. This allows for 24×7 continued operations during disaster scenarios and planned maintenance windows. The BEB client library implements the switching mechanisms for producers, consumers and streams applications.
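The BEB client library itself is Rabobank-internal, but the general idea behind switching a running Kafka Streams application between clusters can be sketched as follows. Everything below is illustrative rather than the actual library: it assumes the library holds on to a factory for the topology and, when told to switch, closes the running KafkaStreams instance and rebuilds it against the other cluster's bootstrap servers (offset translation and state handling are left out).

import java.util.Properties;
import java.util.function.Consumer;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

// Illustrative sketch only: the real BEB client library is not public, so the class
// and method names here are assumptions.
public class SwitchableStreamsApp {

    private final String applicationId;
    private final Consumer<KStreamBuilder> topologyFactory;
    private KafkaStreams streams;

    public SwitchableStreamsApp(String applicationId, Consumer<KStreamBuilder> topologyFactory) {
        this.applicationId = applicationId;
        this.topologyFactory = topologyFactory;
    }

    // Starts (or restarts) the application against the given cluster.
    public synchronized void switchTo(String bootstrapServers) {
        if (streams != null) {
            streams.close();                 // stop consuming/producing on the old cluster first
        }
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);

        KStreamBuilder builder = new KStreamBuilder();
        topologyFactory.accept(builder);     // rebuild the same topology on a fresh builder
        streams = new KafkaStreams(builder, new StreamsConfig(props));
        streams.start();
    }
}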

Rabo Alerts is a system formed by a set of microservices that produce, consume and/or stream messages from BEB. All data types and code discussed below can be found in a GitHub repository. This post will simplify source code listings to some extent (removing unused fields for example), but the listings still reflect the actual code running in production.

THE CASE: RABO ALERTS

Rabo Alerts is a service that allows Rabobank customers to be alerted whenever interesting financial events occur. A simple example of such an event is a certain amount being debited from or credited to your account, but more complex events exist as well. Alerts can be configured by customers based on their preferences and are sent via three channels: email, SMS and mobile push notifications. It is worth noting that Rabo Alerts is not a new or piloted service. It has been in production for over ten years and is available to millions of account holders.

CHALLENGES

The former implementation of Rabo Alerts resided on mainframe systems. All processing steps were batch-oriented: depending on the alert type, the mainframe would derive the alerts to be sent anywhere from every couple of minutes to only a few times per day. The implementation was very stable and reliable, but there were two issues that Rabobank wanted to solve: (1) lack of flexibility and (2) lack of speed.

Flexibility for adapting to new business requirements was low, because changing the supported alerts or adding new (and smarter) alerts required a lot of effort. Rabobank's pace of introducing new features in its online environment has increased considerably over the past years, so an inflexible alerting solution was becoming increasingly problematic.

Speed of alert delivery was also an issue, because the old implementation could take anywhere from 5 minutes to 4-5 hours to deliver alerts to customers (depending on alert type and batch execution windows). Ten years ago one could argue this was fast enough, but today customer expectations are much higher. The time window in which Rabobank can present "relevant information" to the customer is much smaller than it was ten years ago.

So the question arose: how could the existing mechanism be redesigned to become more extensible and faster? And of course the redesigned Rabo Alerts would also need to be robust and stable, so that it could properly serve its existing user base of millions of customers.

STARTING SMALL

For the past year we have been redesigning and reimplementing the alerting mechanisms using Kafka and Kafka's Streams API. Since the entire Rabo Alerts service is quite big, we decided to start with four simple but heavily used alerts:

• Balance above threshold
• Balance below threshold
• Credited above threshold
• Debited above threshold

Each of these alerts can be derived from the stream of payment details from the Current Account systems. Customers can enable these alerts and configure their own threshold per alert. For instance: “send me an SMS when my balance drops below €100” or “send me a push message when someone credits me more than €1000” (often used for salary deposit notifications).
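The exact Avro schemas for these settings live in the GitHub repository mentioned above. Purely to give a rough idea of their shape, inferred from nothing more than the accessors used in the code later in this post (getAccountAlertSettings(), getSettings(), getAddresses()), a customer's configuration could look something like the sketch below; every name beyond those accessors is an assumption.

import java.util.List;

// Illustrative sketch of the per-customer alert configuration. The real types are
// Avro-generated classes from the repository; everything beyond the accessor names
// mentioned above is an assumption made for this example.
class CustomerAlertSettingsSketch {

    enum AlertType { BALANCE_ABOVE, BALANCE_BELOW, CREDITED_ABOVE, DEBITED_ABOVE }
    enum Channel { EMAIL, SMS, PUSH }

    static class AccountAlertSetting {
        AlertType type;            // which of the four alerts this threshold is for
        long thresholdInCents;     // e.g. 10_000 for "balance drops below €100"
        List<Channel> channels;    // email, SMS and/or push
    }

    static class AccountAlertSettings {
        String accountNumber;                 // the account these settings apply to
        List<AccountAlertSetting> settings;   // the customer's thresholds for this account
    }

    List<AccountAlertSettings> accountAlertSettings;  // settings grouped per account
    List<String> addresses;                           // email addresses, phone numbers, device registrations
}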

Here are some screenshots that illustrate how Rabo Alerts are configured through the mobile banking app.

ALERTING TOPOLOGY

Our first step was to redesign the alerting process. The basic flow goes like this:

  1. Receive an account entry (the payment details) from the core banking systems.
  2. For every account entry:
    a. Translate the account number to a list of customers that have read permissions on the account.
    b. For every such customer, look up whether the customer has configured Rabo Alerts for the given account number.
  3. If so, check if this account entry matches the customer's alert criteria.
  4. If so, send the customer an alert on the configured channels (email, push and SMS).

Step 1 requires a link with the core banking systems that execute the transactions.
Step 2a requires that we create a lookup table containing all customer permissions of all accounts.
Step 2b requires that we have a lookup table that contains the Rabo Alert settings for all customers.
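In Kafka Streams terms, both lookup tables can be kept as KTables fed from compacted topics, so that the stream of account entries can be joined against them (which is exactly what the topology shown later does). Below is a minimal sketch of that wiring in the same, older KStreamBuilder API used throughout this post; the topic and store names are made up, and the real code configures the Avro Serdes through the BEB client library.

KStreamBuilder builder = new KStreamBuilder();

// Lookup table 1: account number -> customers with read permission on that account.
// Topic and store names are illustrative only.
KTable<AccountId, CustomerIds> accountToCustomerIds =
    builder.table("account-to-customer-ids", "account-to-customer-ids-store");

// Lookup table 2: customer -> that customer's Rabo Alerts settings.
KTable<CustomerId, CustomerAlertSettings> alertSettings =
    builder.table("customer-alert-settings", "customer-alert-settings-store");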

Using this flow and requirements, we started drawing the following topic flow graph:

All the white boxes in the picture are Kafka topics, listing their Avro key/value data types. Most data types are self-explanatory, but a few are worth mentioning: AccountEntry carries the payment details coming from the Current Account systems, CustomerAlertSettings holds a customer's Rabo Alerts configuration, and OutboundMessage is an alert message that is ready to be sent over one of the channels.

The blue boxes are standalone applications (one might call them microservices), implemented as runnable jars using Spring Boot and deployed on a managed platform. Together they provide all the functionality needed to implement Rabo Alerts: consuming the account entries, deriving alerts (the Alerting application whose code is shown below) and sending the resulting messages to customers over the email, SMS and push channels (the Senders).

SHOW ME THE CODE

The Kafka Streams code for Alerting consists of only two classes.

The first class is the BalanceAlertsTopology. This class defines the main Kafka Streams topology using a given KStreamBuilder. It implements BEB’s TopologyFactory, a custom interface used by BEB’s client library to generate a new Kafka Streams topology after the application starts or when it is directed to switch over to another Kafka cluster (datacenter switch/failover).
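The TopologyFactory interface itself is not shown in this post. Judging from the description above, it is essentially a callback that the BEB client library invokes with a fresh builder on startup and again after every cluster switch. A minimal sketch of what it might look like; the exact method name and signature are assumptions:

// Sketch of the BEB TopologyFactory contract as described above; the method name and
// signature are assumptions, not the actual interface.
public interface TopologyFactory {
    void createTopology(KStreamBuilder builder);
}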

// Stream of account entries, keyed by account number
KStream<CustomerId, KeyValue<SpecificRecord, OutboundMessage>> addressedMessages =
    builder.<AccountId, AccountEntry>stream(accountEntryStream)
        // Join each account entry with the customers that have read permissions on the account
        .leftJoin(accountToCustomerIds, (accountEntry, customerIds) -> {
          if (isNull(customerIds)) {
            return Collections.<KeyValue<CustomerId, AccountEntry>>emptyList();
          } else {
            return customerIds.getCustomerIds().stream()
                .map(customerId -> KeyValue.pair(customerId, accountEntry))
                .collect(toList());
          }
        })
        // Fan out to one record per (customer, account entry) pair, re-keyed by customer id
        .flatMap((accountId, accountEntriesByCustomer) -> accountEntriesByCustomer)
        // Repartition via an intermediate topic so the next join is co-partitioned by customer id
        .through(customerIdToAccountEntryStream)
        // Join with the customer's alert settings
        .leftJoin(alertSettings, Pair::with)
        // Generate zero or more addressed alert messages per account entry
        .flatMapValues(
            (Pair<AccountEntry, CustomerAlertSettings> accountEntryAndSettings) ->
                BalanceAlertsGenerator.generateAlerts(
                    accountEntryAndSettings.getValue0(),
                    accountEntryAndSettings.getValue1())
        );

// Send all Email messages from addressedMessages
addressedMessages
    .filter((e, kv) -> kv.key instanceof EmailAddress)
    .map((k, v) -> v)
    .to(emailMessageStream);

// Send all Sms messages from addressedMessages
addressedMessages
    .filter((e, kv) -> kv.key instanceof PhoneNumber)
    .map((k, v) -> v)
    .to(smsMessageStream);

// Send all Push messages from addressedMessages
// (CustomerId is later resolved to a list of customer's mobile devices)
addressedMessages
    .filter((e, kv) -> kv.key instanceof CustomerId)
    .map((k, v) -> v)
    .to(customerPushMessageStream);

The topology defines a number of steps:

  1. Consume the account entry stream, keyed by account number.
  2. Left-join it with the account-to-customers lookup table and fan out to one record per authorized customer, re-keyed by customer id.
  3. Repartition the result through an intermediate topic so that it is co-partitioned with the alert settings table.
  4. Left-join with the customer's alert settings.
  5. Generate zero or more addressed alert messages per account entry using BalanceAlertsGenerator.
  6. Route each message to the email, SMS or push topic, based on the type of its address key.

The magic of alert generation is implemented in the BalanceAlertsGenerator helper class, called from the final flatMapValues step of the topology above. Its main method is generateAlerts(), which takes an account entry and the alert settings of a customer who is authorized to view the account. Here's the code:

public static List<KeyValue<SpecificRecord, OutboundMessage>> generateAlerts(AccountEntry accountEntry,
                                                                             CustomerAlertSettings settings) {
  /* Generates addressed alerts for an AccountEntry, using the alert settings, with the following steps:
   *  1) Settings are for a specific account, drop AccountEntries not for this account
   *  2) Match each setting with all alerts to generate appropriate messages
   *  3) Address the generated messages
   */

  if (settings == null) {
    return new ArrayList<>();
  }

  return settings.getAccountAlertSettings().stream()
      .filter(accountAlertSettings -> matchAccount(accountEntry, accountAlertSettings))
      .flatMap(accountAlertSettings -> accountAlertSettings.getSettings().stream())
      .flatMap(accountAlertSetting -> Stream.of(
          generateBalanceAbove(accountEntry, accountAlertSetting),
          generateBalanceBelow(accountEntry, accountAlertSetting),
          generateCreditedAbove(accountEntry, accountAlertSetting),
          generateDebitedAbove(accountEntry, accountAlertSetting))
      )
      .filter(Optional::isPresent).map(Optional::get)
      .flatMap(messageWithChannels -> mapAddresses(messageWithChannels.getValue0(), settings.getAddresses())
          .map(address -> KeyValue.pair(address, messageWithChannels.getValue1())))
      .collect(toList());
}

The method executes these steps:

  1. If the customer has no alert settings at all, return an empty list.
  2. Drop settings that are not for the account of this account entry (matchAccount).
  3. For every remaining setting, try to generate each of the four alert types (balance above/below, credited/debited above); only settings whose criteria match produce a message.
  4. Address each generated message, i.e. expand it into one entry per configured address, and return the resulting (address, message) pairs.

Other helper methods in the same class are:

  • matchAccount(), which checks whether a group of settings applies to the account of the given account entry;
  • generateBalanceAbove(), generateBalanceBelow(), generateCreditedAbove() and generateDebitedAbove(), which each check one alert criterion and, when it matches, return a message together with the channels it should be sent on;
  • mapAddresses(), which resolves those channels to concrete addresses: an EmailAddress, a PhoneNumber or a CustomerId for push messages.
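To illustrate what one of the generator helpers does, here is a sketch of generateCreditedAbove(). The getters, the AlertType enum and the buildMessage() helper are hypothetical stand-ins for the actual Avro fields and repository code; only the Optional<Pair<channels, message>> return shape is implied by how generateAlerts() consumes the result above.

// Hedged sketch: the field names, the AlertType enum and buildMessage() are hypothetical;
// amounts are shown as cents for brevity. Only the Optional<Pair<...>> shape follows from
// how generateAlerts() uses the result (value0 = channels, value1 = message).
private static Optional<Pair<List<Channel>, OutboundMessage>> generateCreditedAbove(
    AccountEntry accountEntry, AccountAlertSetting setting) {
  if (setting.getAlertType() != AlertType.CREDITED_ABOVE) {
    return Optional.empty();   // this setting is about a different alert type
  }
  long amount = accountEntry.getAmountInCents();
  if (amount <= 0 || amount < setting.getThresholdInCents()) {
    return Optional.empty();   // not a credit, or below the configured threshold
  }
  OutboundMessage message = buildMessage(accountEntry, setting);  // hypothetical helper
  return Optional.of(Pair.with(setting.getChannels(), message));
}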

Together with a few additional classes to wrap this functionality in a standalone application, that’s all there is to it!

FIRST TEST RUN

After a first rudimentary implementation, we took the setup for a test drive. The big remaining question was how fast it would be. Well, it amazed us, and our expectations had started out high! The whole round trip from payment order confirmation to the alert arriving on a mobile device takes only one to two seconds, more often closer to one than two. This round trip includes the time taken by the payment factory (validation of the payment order, transaction processing), so response times may vary somewhat depending on the payment factory's workload at that point in time. The whole alerting chain, from an account entry posted on Kafka up to the Senders pushing out messages to customers, is typically executed within 120 milliseconds. In the sending phase, push alerts are fastest, taking only 100-200 milliseconds to arrive on the customer's mobile device. Email and SMS are somewhat slower channels, with messages arriving 2-4 seconds after the Senders push them out. Compare that to the old setup, where it would typically take several minutes up to a few hours until an alert was delivered.

The following video demonstrates the speed of alert delivery using my personal test account. Note that although I use it for testing, this is a regular Rabobank payment account that runs in production!

Rabo Alerts demo
