October 13, 2022

Release blog 2022.3 – the autumn release

Revamped UI, schema explorer, Terraform provider, and more - the autumn release is here! Read more about all the exciting stuff that we've shipped in this release.


Axual Platform 2022.3, the autumn release, marks a big step in the development of our platform. As a preview, we are introducing a new way of managing Kafka topics through our new Terraform Provider. But most importantly, we are releasing a huge upgrade to our Kafka Self-Service interface. Read all about it in this blog.

Revamped UI

Axual was one of the first companies ever to provide an intuitive Self-Service interface for Apache Kafka, with two goals:

  • Make streaming simple: simplify functional management of Kafka, such as topic creation and configuration
  • Bring Kafka under governance: make sure every Kafka topic is known and owned by someone in the organization, and bring teams together to collaborate on the same streaming platform

Over the past years we have introduced many new features in our Self-Service UI for Kafka, such as support for Kafka Connect and new authentication mechanisms. But sometimes it is a good idea to take a couple of steps back and use the insights you have gathered to re-engineer a product. This is precisely what we have done. In the past months we have completely rebuilt and revamped our user interface for Kafka, applying modern technologies and increasing user friendliness along the way. Many small and larger improvements have made their way into the 2022.3 release.

Schema explorer

We are introducing a new way to browse through all available Avro schemas: the schema explorer. Using the schema explorer you can easily click through the namespaces and view or download schemas. In upcoming releases we will add support for other schema types and provide more insight into which schemas are used where.
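
If you prefer to do the same lookups from a script, the sketch below is a rough programmatic equivalent, assuming your deployment exposes a Confluent-compatible Schema Registry REST endpoint; the URL is a placeholder.

```python
import requests

# Placeholder: replace with the schema registry endpoint of your deployment.
REGISTRY = "https://schema-registry.example.com"

# List all registered subjects (typically one per topic key/value).
subjects = requests.get(f"{REGISTRY}/subjects").json()

for subject in subjects:
    # Fetch the latest registered version of each subject.
    latest = requests.get(f"{REGISTRY}/subjects/{subject}/versions/latest").json()
    print(subject, "version", latest["version"])
    # latest["schema"] holds the Avro schema as a JSON string, ready to save to disk.
```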

Connector configuration property grouping

If you are dealing with (Kafka) connectors, you might know that you usually have to provide a lot of configuration properties before you can start your connector. We have made that a bit easier by showing the connector property groups in the user interface.
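
To give an idea of how many properties a typical connector needs, here is a minimal sketch that submits a connector configuration through the standard Kafka Connect REST API; the Connect URL and database details are placeholders, and the exact property names depend on the connector plugin you use.

```python
import requests

# Placeholder: point this at your Kafka Connect REST endpoint.
CONNECT = "http://connect.example.com:8083"

# A typical JDBC source connector configuration; property names vary per plugin.
config = {
    # Connector: which plugin to run and how many tasks
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    # Connection: how to reach the source database
    "connection.url": "jdbc:postgresql://db.example.com:5432/orders",
    "connection.user": "connect",
    "connection.password": "secret",
    # Mode: how new rows are detected
    "mode": "incrementing",
    "incrementing.column.name": "id",
    # Topic: where the records end up
    "topic.prefix": "orders-",
}

# Create or update the connector with this configuration.
response = requests.put(f"{CONNECT}/connectors/orders-source/config", json=config)
response.raise_for_status()
```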

Stream browse controls

Browsing or searching for messages is an easy way for Kafka developers to see how their producer is doing or to discover what messages really look like on a particular stream. We have renamed “Browse & Search” to “Messages” and included better controls to filter and search for messages.
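
For developers who want to do the same from code, the sketch below reads the last few messages of a topic partition with the confluent-kafka Python client; the broker address is a placeholder, and a real cluster will also need the appropriate security settings.

```python
from confluent_kafka import Consumer, TopicPartition

# Placeholder connection settings; a real cluster will also need TLS/SASL configuration.
consumer = Consumer({
    "bootstrap.servers": "broker.example.com:9092",
    "group.id": "message-browser",
    "enable.auto.commit": False,
})

topic, partition, last_n = "orders", 0, 10

# Determine the high watermark so we can start reading the last N messages.
low, high = consumer.get_watermark_offsets(TopicPartition(topic, partition), timeout=10)
next_offset = max(low, high - last_n)
consumer.assign([TopicPartition(topic, partition, next_offset)])

while next_offset < high:
    msg = consumer.poll(timeout=5.0)
    if msg is None or msg.error():
        continue
    print(msg.offset(), msg.key(), msg.value())
    next_offset = msg.offset() + 1

consumer.close()
```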

The new UI is included in 2022.3 under a preview label. While we have tested the UI extensively, we acknowledge that we are human and that, as a consequence, some bugs will have ended up in the software. That is why we will use the fourth quarter to collect your feedback before we formally declare the new UI the default.
We hope you like the upgrade to our user interface and we invite you to try it out and report any feedback through the new “Add feedback” button in the left menu.

GitOps for Apache Kafka: Terraform Provider for Axual

While applying version control has long been second nature to software engineers, it took some time before operators adopted it in a similar way. Fast forward to 2022: no one has to explain what “infrastructure as code” means. Mature frameworks like Ansible, Puppet and Terraform are assisting tech teams around the world with their IaC challenges.

As the code or configuration is usually stored in Git repositories, core features of the version control system can be used for auditing, quality control and governance over everything that happens to what is stored in the repository. But this does not have to be limited to code and configuration. Any infrastructure running in production can be put under version control and leveraged for continuous deployment, making sure production is always in sync with what is stored in the repository: GitOps.

When translating this concept to Kafka, the infrastructure consists of schemas, topics and the ACLs set on those topics. For example, as a team producing data to a topic, you usually own that topic and the configuration it needs.

Of course, you can perform this configuration in an imperative way using our Self-Service interface or via our API. Existing Kafka GitOps tools work directly on Kafka, bypassing any governance or control layer put in place by the operators, which is unacceptable for enterprise organizations. As of Axual Platform 2022.3, if you want to declare your topics and their configuration, you can do so with the Terraform Provider for Axual.

In the following short video you see a demo of how this Terraform provider can be used to:

  1. Define topic ownership
  2. Create topics
  3. Define application ownership
  4. Create applications
  5. Configure application certificates
  6. Perform topic configurations
  7. Authorize an application to produce/consume from a topic

Give our Terraform Provider a test drive by requesting a SaaS trial and following the Getting Started guide in the documentation.


Please note that our new Terraform Provider currently only supports environments with “Auto approval” enabled, and that Avro support has not been added yet.

More metrics please!

In the 2022.2 release, we gave users of our platform more insight into how their applications and streams are doing through our Metrics API. In this release, we have added support for many more metrics, such as:

  • Consumer lag in records: how many records is my application behind on a topic?
  • Consumer time to catch up: how much time does my application need to catch up reading from the topic?
  • Partition size in bytes and topic size in bytes: how much storage is used by my partitions and topics?
  • Partition size in records and topic size in records: how many messages are there on my topic? How is the data distributed over my topic?
  • Incoming byte rate per topic and outgoing byte rate per topic: how much data is produced to / consumed from my topic?

In addition, we have updated our API documentation with meaningful examples, including how to query the API from different programming languages.
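
To give an idea of what such a query can look like, here is a minimal Python sketch; the base URL, endpoint path, parameter names and authentication used below are hypothetical placeholders, so check the Metrics API documentation for the actual ones.

```python
import requests

# All names below are illustrative placeholders; consult the Metrics API docs for the real paths.
BASE_URL = "https://platform.example.com/metrics/api"   # hypothetical base URL
TOKEN = "..."  # obtained through your usual authentication flow

params = {
    "metric": "consumer_lag_records",  # hypothetical metric name
    "topic": "orders",
    "application": "order-processor",
}

response = requests.get(
    f"{BASE_URL}/query",               # hypothetical endpoint path
    params=params,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()

# Print the returned data points, whatever shape the real API gives them.
for point in response.json().get("datapoints", []):
    print(point)
```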

Strimzi upgrade

Axual Platform is based on the Strimzi Operator. For the 2022.3 release we have upgraded to Strimzi 0.31.1. It allows for an upgrade to the 3.2.3 release of Apache Kafka, addressing a well-known vulnerability recently discovered in Kafka: CVE-2022-34917. The operator instructions for Axual Operator 0.12.0 contain the steps needed to upgrade to Kafka 3.2.3. You can find the operations documentation here.

That’s it, folks! If you would like to experience Axual yourself, feel free to request a free, 30-day trial right here. Or, if you would like to get a demo and discuss your specific use case, please get in touch with our experts here.



Joris Meijer
Security Officer, Customer Success
