Since our last release we have worked on improvements in all layers of the platform. In this article we walk through them one by one.
Up-to-date platform components
We have been updating dependencies throughout our platform for the last couple of months. The two biggest were the upgrades of Strimzi and Keycloak. Axual Platform 2022.2 brings Keycloak 18 to users of our platform.
Strimzi upgrade
Axual Platform is based on the Strimzi Operator. For the 2022.2 release we have upgraded to Strimzi 0.29.0, which allows an upgrade to the 3.2.0 release of Apache Kafka. The operator instructions contain upgrade steps for Axual Operator 0.6.3 and 0.7.0, covering the upgrade to Kafka 3.0.0 and Kafka 3.2.0 respectively. You can find the operations documentation here.
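For illustration, the Kafka version of a Strimzi-managed cluster is declared in the `Kafka` custom resource, so an upgrade typically involves bumping the `spec.kafka.version` field. The cluster name below is a placeholder, and most required fields are omitted for brevity:

```yaml
# Illustrative Kafka custom resource fragment (cluster name is an example)
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.2.0   # bumped as part of the upgrade to Strimzi 0.29.0
    replicas: 3
    # other required fields (listeners, storage, ...) omitted for brevity
```

Always follow the full upgrade steps in the operations documentation rather than only changing the version field, since broker protocol settings may need coordinated changes.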
Application/stream owner insights
Nowadays, businesses more than ever need to interact in real-time with their customers and partners. As you all know, (event) streaming has proven to be an excellent way to facilitate fast and seamless communication between applications. As applications rely more on a streaming data backbone, it becomes increasingly important to guarantee its availability. This brings a responsibility to platform operators, who instrument the platform to continuously monitor and log platform events and make sure they are alerted appropriately.
The responsibility, however, does not stop with the platform operators. DevOps engineers are responsible for their applications from DEV to PROD, and they are used to instrumenting their applications just as the operators do for the platform. Looking specifically at Kafka application developers, among other options they can instrument their consumer/producer or streams applications by enabling JMX and scraping the application’s monitoring interfaces, e.g. by adding a JMX exporter to the application and configuring Prometheus to scrape it.
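As a sketch of that scraping setup, a Prometheus job can target the HTTP endpoint exposed by the JMX exporter. The job name, hostname, and port below are assumptions for illustration (9404 is a commonly used JMX exporter port, not a requirement):

```yaml
# prometheus.yml fragment -- job name and target are illustrative
scrape_configs:
  - job_name: "kafka-producer-app"
    static_configs:
      - targets: ["producer-app.example.com:9404"]  # JMX exporter HTTP endpoint
```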
But that is not enough for producer/consumer applications interacting with a streaming platform. If you depend on a topic, either because you produce to it or because your use case depends on its data, you want to know how that topic is doing from the perspective of the platform. For example: how many messages per second are produced to a topic? How many messages in total are there on my topic? This poses a challenge for the platform team, which does not necessarily want to expose all Kafka metrics in full detail, nor to offer and maintain this “observability interface” for every team connecting to the streaming platform.
This is where our newest kid on the block, Metrics Exposer, comes in. Metrics Exposer is an extension of Axual Platform that offers a REST API, which can be queried by application and stream owners who want more insight into how their stream (topic) or application is doing.
Available metrics
With the introduction of this new API we are offering two metrics, and we aim to continuously expand the API with the metrics that bring the most value to its users. The following two metrics are supported:
- Message Rate: the number of messages per second on a topic
- Partition Size: the number of messages on a topic partition
OpenAPI specification, authentication
We have released an OpenAPI specification to help developers understand what the API offers, and to speed up the development of a client or the configuration of any observability tool that understands REST. As described in the public API documentation, the Metrics Exposer API is secured by the OAuth2 protocol. All requests to the API must provide a valid JWT via the Authorization header.
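To illustrate the authentication scheme, a client passes the JWT as a Bearer token in the Authorization header. The base URL, endpoint path, and token below are placeholders, not the actual API; consult the OpenAPI specification for the real endpoints:

```python
import urllib.request

# Placeholder values -- see the OpenAPI spec for the real endpoint and auth flow
BASE_URL = "https://metrics-exposer.example.com"
JWT_TOKEN = "eyJ...your-token-here"

def build_metrics_request(path: str) -> urllib.request.Request:
    """Build an authenticated GET request for the Metrics Exposer API."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={"Authorization": f"Bearer {JWT_TOKEN}"},
        method="GET",
    )

# Hypothetical path for the Message Rate metric:
req = build_metrics_request("/metrics/message-rate")
# urllib.request.urlopen(req) would then perform the call.
```

Any REST client or observability tool can follow the same pattern, as long as it attaches the token to every request.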
Operator instructions
Instructions for operators on how to make this API available to end users have been provided as part of the “Upgrading to 2022.2” docs here.
Beta state
It is important to know that the API is currently in a beta state and is likely to change based on user feedback and as new metrics and functionality are introduced. We will always communicate any (breaking) change in the changelog of the API documentation.
Sneak peek: revamped UI
We were getting a bit too used to the current UI’s look and feel, and we think it is time to freshen it up. That’s why we have started revamping the Self-Service UI for Kafka, which will not only offer a pleasant look and feel but also introduce a lot of usability improvements. The aim is to ship this revamped UI in the 2022.3 release; for now we are sharing a sneak preview in the form of a couple of screenshots.
Other improvements
As with every product release we have upgraded many dependencies in the platform components and fixed minor bugs that were affecting the operator or developer experience. Lastly, we have improved the Getting Started section to better guide the user through their initial steps on our platform. You can expect many more initiatives on the onboarding experience in future releases.
For now, I can only invite you to try out the new platform features by requesting a trial or asking your stream team to upgrade your Axual Platform installation as soon as possible :-).
Happy streaming!
The Axual Team