June 27, 2024

Release blog 2024.2 – the Summer release

As the days get longer and the coffee breaks get sunnier, we’re excited to roll out the summer release of Axual Platform 2024.2. This update brings improvements that make managing real-time data as refreshing as a leap into the pool. Our previous release marked the completion of the convergence of Axual Platform and Axual Governance architectures. With the new-and-improved architecture, we have strengthened our whole platform in the past year.

In this release, we have focused on enhancing the platform by adding new features and keeping its components up to date and ready. Building on the foundation we established in our spring release, the platform is now more robust and user-friendly than ever.

One of the key highlights of this release is our new Notification Service. We know how important it is for Data Owners and Application Owners to stay informed and respond promptly to changes and issues.

Embracing Summer with Axual’s Notification Service

Streamlining Kafka Management with Axual’s new notification service

Managing real-time data is challenging, especially when it involves keeping track of numerous Kafka topics and applications. The need for timely information and quick responses is paramount for both Data and Application Owners. That’s why we’re excited to introduce our new Notification Service in the Axual Platform 2024.2 release, designed to simplify and enhance your Kafka management experience.

Understanding the problem

As a Data Owner, you often oversee multiple Kafka topics. New applications frequently request access to one of your topics, but you may not be aware of such a request because, until now, there was no notification system in place. As a result, the request sits idle, potentially delaying crucial data processing and business operations.

Similarly, if you’re an Application Owner, you might be frustrated by not knowing whether your request to access a Kafka topic has been approved or denied. Even more critically, if your connector application fails, immediate action is required to maintain data flow and avoid disruptions, but without timely alerts the issue could go unnoticed.

A solution based on your feedback

We’ve listened to your feedback and implemented a Notification Service that addresses these challenges head-on. Here’s how it works:

Topic Access Requested: Data Owners will receive an email notification whenever a new application requests access to one of their Kafka topics. This ensures that requests are promptly reviewed and acted upon, preventing unnecessary delays.

Topic Access Approved/Denied: Application Owners will be notified via email when their access requests are approved or denied. This eliminates uncertainty and allows them to plan their next steps accordingly.

Connector Application Failing: In the event of a connector application failure, Application Owners will receive an immediate notification. This allows for swift intervention, minimizing downtime and maintaining data flow integrity.

How it helps

With these notifications in place, both Data Owners and Application Owners can enjoy a more streamlined and efficient workflow. The Notification Service significantly reduces response times, as stakeholders are alerted immediately when action is required. This enhances operational efficiency and removes the need for custom monitoring systems, simplifying your overall Kafka management process.

What’s next?

We’re not stopping here. Shortly, we’ll be adding even more notification events to enhance your user experience further:

New Schema Applied: Data Owners will be notified when a new schema is applied to an existing Kafka Topic, ensuring they stay informed about necessary changes.

Consumer Application Lag: Application Owners will receive alerts when a consumer application’s lag exceeds a certain threshold, allowing for timely adjustments and optimization.

Introducing ProtoBuf Support

What is ProtoBuf?

ProtoBuf (Protocol Buffers) is a language-neutral, platform-neutral, extensible mechanism for serializing and deserializing structured data. In Kafka topics, it defines how messages are written and read.

What ProtoBuf offers Data and Application Owners

For Data Owners, ProtoBuf offers a streamlined way to define the structure of messages within Kafka topics. Instead of being bound to traditional methods, they can upload .proto files directly into Axual’s Self-Service interface. These files serve as blueprints, describing data structures with precision and clarity.

Meanwhile, ProtoBuf helps Application Owners navigate Kafka’s complexities more efficiently. With the ProtoBuf compiler integrated into Axual’s platform, they can serialize and deserialize Kafka messages effortlessly. This not only enhances operational agility but also improves the robustness of data management across applications.


How we support Protobuf in the platform

We allow the Data Owner to upload a .proto file, which is stored in Self-Service as a ProtoBuf schema. The Data Owner can then create a Kafka Topic with the ProtoBuf schema type and select the uploaded schema. (Note: ProtoBuf support is available only in installations using the Apicurio Schema Registry.)
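For illustration, a minimal .proto file of the kind a Data Owner might upload could look like this (the package, message name, and fields are hypothetical examples, not part of the platform):

```protobuf
// Hypothetical schema for an order event published to a Kafka topic.
syntax = "proto3";

package example.orders;

message OrderPlaced {
  string order_id    = 1; // unique identifier of the order
  string customer_id = 2; // customer placing the order
  int64  placed_at   = 3; // event time as epoch milliseconds
  double total       = 4; // order total amount
}
```

Once uploaded, a schema like this can be selected when creating a Kafka Topic with the ProtoBuf schema type.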

How it helps

ProtoBuf support in Axual empowers both Data Owners and Application Owners by improving data management efficiency, ensuring data integrity, and facilitating seamless integration across diverse IT landscapes. This transformative capability enhances the performance of Kafka-based applications and positions organizations for sustained growth and innovation in the era of real-time data processing.

New access control options

In this update, we have listened to your feedback and designed new access control options to streamline operations, enhance security, and empower users across the platform.

Empowering group managers

One significant update is the introduction of Group Managers. This feature allows Tenant Admins to designate individuals who manage group memberships. This change addresses the challenge where all group members traditionally had identical permissions, potentially leading to unauthorized control over critical resources. With Group Managers in place, administrative oversight is refined, ensuring that permissions are granted and revoked strategically, safeguarding the integrity of group resources.

Simplifying Kafka topic access

Another pivotal enhancement enables Data Owners to assign access permissions to groups rather than individual users. This innovation alleviates the burden of manually managing access permissions for multiple users within a group. Moreover, it mitigates the risk of oversight when users leave or join the organization, ensuring that only current group members have access to relevant Kafka topics. This streamlined approach enhances security and operational efficiency within Kafka environments.

Introducing viewer groups for enhanced visibility

The final enhancement is the introduction of Viewer Groups across Streams, Applications, and Environments. This feature lets Data Owners, Application Owners, and Environment Owners specify which groups are authorized to view configurations within their respective domains. Previously, Incident Teams had to be added to Resource Owner teams to see a configuration, which was inconvenient and hindered timely access to critical settings. With Viewer Groups, stakeholders can easily manage visibility permissions, ensuring the relevant teams have the insight they need to maintain operational continuity.

Simplify your streaming applications with KSML 1.0.0

KSML has reached the big 1.0.0 milestone! With this release, the KSML language has been declared stable and contains all the tooling required for operation. You can monitor your deployment by collecting logs and metrics, and both can be configured to your requirements. This release also introduces Helm charts for easy deployment of KSML applications on Kubernetes.

Check the release notes and the documentation site for more information on how to run KSML.

Introduction to KSML

Kafka Streams provides an easy way for Java developers to write streaming applications. But as powerful as the framework is, the requirement to write Java code always remained. Even simple operations, like adding or removing a field in a Kafka message, require developers to set up new Java projects, write a lot of boilerplate code, set up and maintain build pipelines, and manage the application in production.

KSML lets developers skip most of this work by expressing the desired functionality in YAML. KSML is not intended to be a full replacement for complex Kafka Streams code, or to compete directly with other stream processing frameworks like Flink. It is meant to ease the life of development teams in use cases where simplicity and quick development matter more than the power of heavier, more feature-complete frameworks.

One of the main advantages of KSML is that it is fully declarative. This means that common developer responsibilities – like opening and closing connections to Kafka – are handled by the framework. All developers need to worry about is how to transform input messages to output messages. To illustrate this, let’s look at a few common use cases.
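As an illustrative sketch, a KSML definition that copies messages from one topic to another while logging every key could look roughly like this. The topic names and data types below are made up, and the exact syntax may vary by KSML version, so consult the KSML documentation before using it:

```yaml
# Hypothetical KSML definition: copy messages from an input topic to an
# output topic, logging the key of every message that passes through.
streams:
  sensor_source:
    topic: sensor_data_input      # assumed input topic
    keyType: string
    valueType: json
  sensor_copy:
    topic: sensor_data_copy       # assumed output topic
    keyType: string
    valueType: json

pipelines:
  copy_pipeline:
    from: sensor_source
    via:
      - type: peek
        forEach:
          code: log.info("Received message with key={}", key)
    to: sensor_copy
```

Compared to the equivalent Kafka Streams topology in Java, there is no project setup, no boilerplate, and no build pipeline to maintain; the framework takes care of the connections to Kafka.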

Several improvements and bug fixes

Among the smaller but practical changes, the following improvements were made.

Supporting KRaft deployments

This release brings significant updates to our platform’s capabilities, including support for KRaft deployments. The latest version of Strimzi facilitates seamless migration to KRaft, offering enhanced scalability and resilience for Apache Kafka clusters. This update underscores our commitment to providing robust solutions that help organizations manage and scale their data infrastructure confidently and efficiently.

Governance improvements

We have reviewed nearly all pages of the Self-Service to identify and resolve several persistent bugs, including:

  • Allowing partial matches when searching for ApplicationID
  • Improving error messages in the Self-Service
  • Adding the schema-version number when downloading a Schema

Additionally, we updated the Apicurio image version used in our Streaming Charts and are now utilizing the official Apicurio public image to its full extent.

In our release notes, you will find other minor updates to our product, which we are continuously improving based on your feedback. Check them out here.

Begin your Kafka journey with Axual

Inspired by what you’ve read? Facing challenges with Kafka in an enterprise environment? We’re here to assist. Here are your next steps:

– Start your free trial

– Request a demo and talk to our experts

– View other blogs, whitepapers, and customer case studies


Answers to your questions about Axual’s All-in-one Kafka Platform

Are you curious about our All-in-one Kafka platform? Dive into our FAQs for all the details you need, and find the answers to your burning questions.

What benefits does ProtoBuf support offer in this release?

ProtoBuf support allows Data Owners to upload .proto files directly into Axual’s Self-Service interface, making it easier to define the structures of messages within Kafka Topics. This functionality streamlines the serialization and deserialization of Kafka messages for Application Owners, improving data management efficiency and integrity. Overall, ProtoBuf support enhances the robustness of Kafka-based applications and facilitates seamless integration across diverse IT landscapes.

How have access control and user management features been improved in this release?

The latest release introduces Group Managers, allowing Tenant Admins to designate individuals responsible for managing group memberships. This improvement enhances security by ensuring that permissions are granted and revoked strategically. Additionally, Data Owners can now assign access permissions to groups instead of individual users, simplifying access management and reducing the risk of unauthorized access to Kafka topics. The introduction of Viewer Groups further empowers stakeholders to manage visibility permissions effectively.

Rachel van Egmond
Senior content lead

Related blogs

Richard Bosch
November 12, 2024
Understanding Kafka Connect

Apache Kafka has become a central component of modern data architectures, enabling real-time data streaming and integration across distributed systems. Within Kafka’s ecosystem, Kafka Connect plays a crucial role as a powerful framework designed for seamlessly moving data between Kafka and external systems. Kafka Connect provides a standardized, scalable approach to data integration, removing the need for complex custom scripts or applications. For architects, product owners, and senior engineers, Kafka Connect is essential to understand because it simplifies data pipelines and supports low-latency, fault-tolerant data flow across platforms. But what exactly is Kafka Connect, and how can it benefit your architecture?

Apache Kafka
Richard Bosch
November 1, 2024
Kafka Topics and Partitions - The building blocks of Real Time Data Streaming

Apache Kafka is a powerful platform for handling real-time data streaming, often used in systems that follow the Publish-Subscribe (Pub-Sub) model. In Pub-Sub, producers send messages (data) that consumers receive, enabling asynchronous communication between services. Kafka’s Pub-Sub model is designed for high throughput, reliability, and scalability, making it a preferred choice for applications needing to process massive volumes of data efficiently. Central to this functionality are topics and partitions—essential elements that organize and distribute messages across Kafka. But what exactly are topics and partitions, and why are they so important?

Event Streaming
Jimmy Kusters
October 31, 2024
How to use Strimzi Kafka: Opening a Kubernetes shell on a broker pod and listing all topics

Strimzi Kafka offers an efficient solution for deploying and managing Apache Kafka on Kubernetes, making it easier to handle Kafka clusters within a Kubernetes environment. In this article, we'll guide you through opening a shell on a Kafka broker pod in Kubernetes and listing all the topics in your Kafka cluster using an SSL-based connection.

Strimzi Kafka