March 29, 2022

Release blog 2022.1 – the spring release

Spring is just around the corner. Not only does it bring pleasant temperatures, green leaves and flowers, but also a new and shiny release of Axual Platform: release 2022.1.

Since our last release, we have worked on many improvements across all layers of the platform. In this article, we will go through them one by one.

Setting topic properties

As a producer, when creating a topic for your streaming use case, there are a couple of things you need to determine. How long do you want messages to remain available on your topic; in other words, what is the retention time? And how many consumer instances do you want to support reading from the topic at the same time, or: how many partitions does the topic need?
There are more properties that need to be set for a topic, but you don’t necessarily want to worry about all of them; they should just be set to appropriate values automatically. Examples are replication.factor (how many replicas should exist of every message) and min.insync.replicas (the number of replicas that must acknowledge a write for the write to be considered successful). Settings for these parameters are rarely different between two topics on the same cluster, and for that reason we set values for those topic parameters automatically.
If you do need to set advanced stream properties, as of 2022.1 users with the tenant admin role can set these additional properties.
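To make these properties concrete, here is a minimal sketch using the plain Kafka AdminClient. The broker address, topic name and values are made up; on Axual Platform you would set retention and partitions through the Self-Service UI rather than in code, and the replication settings are applied automatically for you.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicPropertiesExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical broker address, for illustration only.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // The two decisions you make as a producer: partitions (parallelism)
            // and retention (how long messages stay available).
            NewTopic topic = new NewTopic("orders", 6, (short) 3)
                    .configs(Map.of(
                            "retention.ms", "604800000",   // keep messages for 7 days
                            "min.insync.replicas", "2"));  // on Axual, set automatically
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```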

SASL support

Security by design, that is what Axual is known for: no application, whether it is a producer or consumer, gets unauthorized access to any topic. Authentication is implemented throughout the platform by means of Mutual TLS, and applications are authorized by the stream owner to produce to and/or consume from a topic.
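For context, this is roughly what a Mutual TLS setup looks like on the application side (a minimal sketch with a hypothetical endpoint and keystore paths; the exact values come from your platform configuration):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class MutualTlsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.com:9093"); // hypothetical endpoint
        props.put("security.protocol", "SSL"); // mutual TLS: broker and client authenticate each other
        props.put("ssl.keystore.location", "/etc/app/client.keystore.jks");     // hypothetical path: the application certificate
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.truststore.location", "/etc/app/client.truststore.jks"); // CA chain used to verify the broker
        props.put("ssl.truststore.password", "changeit");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // produce records here
        }
    }
}
```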

SSL and Mutual TLS are widely used for authentication and authorization purposes, but no matter how well known the technology is, it comes with serious disadvantages. First of all, certificates bring an operational burden to developers and operators. Certificates need to be requested, usually through a company-provided PKI. This means waiting time for a developer who just wants to securely connect their producer or consumer app to a Kafka topic. Because this process is cumbersome, application owners are likely to start reusing certificates for different applications, which increases the risk that the wrong application unintentionally gets access to a topic. Secondly, certificates have a limited validity. This means that as soon as certificates expire, potentially mission-critical Kafka-powered applications will start to fail. Of course, as a DevOps team you can set up processes to avoid missing an expiry date, but again, that does not take away the operational burden.

In short: SSL and Mutual TLS are far from ideal. And since our motto is “Streaming Made Simple”, we would love to help DevOps teams apply Kafka topic security in an easier way. That’s why, with great pride, we are bringing SASL support to our platform.

SASL support is implemented at multiple levels in the platform, from the Kafka broker to the Self-Service UI. As soon as platform operators or tenant admins have enabled SASL support on the broker, they can configure it as an available authentication method in the Self-Service UI. Then, they can gradually add support per instance by marking SASL as an enabled authentication method.

Because of the SASL implementation, the perspective of the application owner has changed a little. We have separated the configuration and authentication settings into separate modals with buttons, and made it clear to end users when configuration or authentication information is missing. When the application owner opens the modal, SASL credentials can be generated by simply pressing a button.

The following short video introduces the platform changes for operators and application owners, and shows how an application owner generates SASL credentials and uses them in an example Kafka producer and consumer application.

I hear you thinking: with SSL also comes payload encryption, so is that lost when I switch to SASL? Short answer: no. SSL is still used for the connection to Kafka, which means that network traffic to and from Kafka remains encrypted. It is just no longer used for the authentication of applications, nor for authorizing topic access.
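To make the difference concrete, here is a minimal sketch of a client configuration after switching to SASL. The listener address, SASL mechanism and credentials below are assumptions: use the mechanism your instance enables and the credentials generated in the Self-Service UI.

```java
import java.util.Properties;

public class SaslClientConfig {
    // Minimal sketch: values are placeholders, not Axual-prescribed settings.
    static Properties saslProperties() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9094"); // hypothetical SASL listener
        props.put("security.protocol", "SASL_SSL");   // TLS still encrypts the traffic
        props.put("sasl.mechanism", "SCRAM-SHA-512"); // assumed; use the mechanism your instance enables
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"my-app\" password=\"<generated-in-self-service>\";");
        props.put("ssl.truststore.location", "/etc/app/client.truststore.jks"); // still used to verify the broker
        props.put("ssl.truststore.password", "changeit");
        return props;
    }
}
```

Note that the keystore with the application certificate is gone: only a truststore remains, to verify the broker's identity over TLS.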


Please use the following links for more information on how to start using SASL as an operator or developer:

Multi-CPU architecture support

The introduction of mobile phones, specifically smartphones, created the need to combine processing power with low energy consumption. Existing CPU architectures such as x86 were not suitable for mobile phones because of their high energy consumption. ARM CPUs, unlike x86, were able to combine low power consumption with high processing power. For that reason, mobile phone manufacturers massively turned to ARM as the architecture for their CPUs. Since the end of 2020, the adoption of ARM CPUs can also be seen in (virtual) servers and laptops, e.g. in the latest Apple MacBook M1 models. Obviously, this also has a huge impact on software vendors and SaaS providers. Lower power consumption is not only beneficial for mobile devices: in a time when energy prices are surging, it is becoming increasingly important to look at the efficiency of the underlying infrastructure. For that reason, Axual has invested in the capability to run the platform on other, non-x86 CPU architectures, such as ARM.

For platform operators, this means that in time they can migrate the nodes backing their Kubernetes cluster from x86 to ARM. For application developers, it means that from now on, Axual Platform can run on non-x86 CPU architectures.
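As a minimal sketch of what such a migration enables (assuming a standard Kubernetes setup; kubernetes.io/arch is the upstream node label), a workload can be pinned to ARM nodes like this:

```yaml
# Hypothetical pod spec fragment: schedule this workload onto arm64 nodes only.
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
```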

Platform improvements under the hood

Cluster API – no more ZooKeeper
One of our core platform components is Cluster API. It is used by other platform components to apply changes to Kafka, such as creating topics, changing topic configurations and applying ACLs. Before 2022.1, Cluster API used ZooKeeper to store its state. As Kafka itself is moving away from ZooKeeper as a dependency, we have decided to remove this dependency from Cluster API as well. Cluster API state is now stored in a dedicated Kafka topic which is automatically created upon startup.
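The topic layout is internal to Cluster API, but as a sketch of the general pattern (topic name and settings below are illustrative, not Cluster API's actual internals), a component bootstrapping its own state topic on startup could look like this:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

// Hypothetical sketch: a component creating a dedicated state topic on startup.
public class StateTopicBootstrap {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative broker address
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic state = new NewTopic("component-state", 1, (short) 3)
                    .configs(Map.of("cleanup.policy", "compact")); // keep the latest value per key
            try {
                admin.createTopics(Collections.singleton(state)).all().get();
            } catch (ExecutionException e) {
                // Topic already exists: the component was bootstrapped earlier.
                if (!(e.getCause() instanceof TopicExistsException)) throw e;
            }
        }
    }
}
```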

Strimzi upgrade
Axual Platform is based on the Strimzi operator. For the 2022.1 release we have upgraded to Strimzi 0.27.1, which also includes multi-CPU architecture support. More importantly, it allows for an upgrade to the 3.0 release of Apache Kafka. Instructions for operators can be found here.
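In Strimzi terms, the Kafka version is controlled from the Kafka custom resource. As an outline (the cluster name is illustrative; always follow the official upgrade instructions), the version bump looks like this:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster          # illustrative cluster name
spec:
  kafka:
    version: 3.0.0          # bumped after upgrading the operator to 0.27.1
    # ... remainder of the cluster spec unchanged
```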

Vault namespace support
With the introduction of Self-Service connector management, there was a need to store connector secrets, such as private keys, in a secure location. We opted for HashiCorp Vault and introduced support for it in both Management API and Connect. For Connect, we have even created and open sourced the Vault Configuration Provider, which is fully Kafka Connect compatible. In enterprise installations, however, it is common to use Vault Namespaces to organize secrets. As of release 2022.1 we have added support for configuring Vault Namespaces in both Management API and Connect.
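For illustration, a Connect worker could wire in a Vault config provider roughly as follows. This is a sketch only: the provider class and parameter names below are hypothetical, so check the Vault Configuration Provider documentation for the exact names.

```properties
# Sketch of a Connect worker configuration wiring in a Vault config provider.
config.providers=vault
config.providers.vault.class=io.axual.connect.VaultConfigProvider    # hypothetical class name
config.providers.vault.param.vault.address=https://vault.example.com:8200
config.providers.vault.param.vault.namespace=team-a                  # the Vault Namespace to use

# A connector config can then reference a secret with the standard
# Kafka Connect placeholder syntax: ${vault:<path>:<key>}
```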

Other improvements
As with every product release, we have upgraded many dependencies in the platform components and fixed minor bugs that were affecting the operator or developer experience. Lastly, we have improved the Getting Started section to better guide users through their initial steps on our platform. You can expect many more initiatives around the onboarding experience in future releases.

For now, I can only invite you to try out the new platform features by requesting a trial or asking your stream team to upgrade your Axual Platform installation as soon as possible :-).


Happy streaming!

The Axual Team



Joris Meijer
Security Officer, Customer Success
