As the festive season draws to a close and we turn our attention to new beginnings and resolutions, we’re excited to unveil the Winter Release of Axual Platform 2024.4.
Winter brings a time for celebration, renewal, and reflection, and we’ve channeled this spirit into our latest update. With a dedicated focus on enhancing your experience, we’ve ensured the platform continues to evolve, meeting your needs while staying ahead of the ever-changing data landscape. This release incorporates valuable user feedback, from the introduction of Data Masking in Self-Service to an enhanced Kafka Streams experience, and delivers a more intuitive, efficient, and future-ready platform.
Whether you're new to Kafka or an experienced user looking to optimize your workflows, this update empowers you with greater control, deeper insights, and improved performance. Let’s explore the key highlights of the 2024.4 release and see how we’re setting you up for success in the next season of data challenges.
In this release blog, we cover the following topics:
- Data Masking in Self-Service
  - Allows Schema and Topic Owners to decide which fields of a Schema are masked when performing a Topic Browse search.
  - Ensures that certain information available on a Kafka topic is never shown to the user.
  - Useful for Topic Owners who want to guarantee that certain information is never visible to users.
- Better Kafka Streams Experience through Self-Service
  - Reduces the manual steps an Application Owner had to perform in Self-Service, such as creating intermediate topics, to run a Kafka Streams application.
  - Lets the Application Owner’s Kafka Streams application create those intermediate topics itself, cutting the time spent in Self-Service.
  - Useful for Application Owners who want to experiment with Kafka Streams without pre-creating many intermediate topics.
- Reset Consumer Application Offset
  - Allows Application Owners to reset their application’s Consumer Group Offset to either LATEST or EARLIEST, so a consumer can skip to the latest available message in a Kafka topic or rewind to the earliest available message.
  - Gives control over where a consumer starts reading an existing Kafka topic; this is crucial when a new application consumes a topic with millions of messages but only the latest are relevant.
  - Useful for Application Owners and developers whose consumers interact with topics containing large amounts of historical data.
Data Masking in Self-Service
This feature allows Schema and Topic Owners to define which fields of a Schema should be masked when performing a Topic Browse search on Kafka. It enables the selective hiding of sensitive or private data within Kafka topics.
What is its purpose?
The purpose of this feature is to ensure that certain sensitive information stored in Kafka topics is never exposed to users who browse the topic. By masking specific fields, organizations can protect sensitive data while still allowing users to interact with the rest of the topic content.
Who is this useful for?
This feature is particularly useful for Topic Owners and Schema Owners responsible for managing the data within Kafka topics. It ensures they can enforce data security policies by controlling which information is visible to users.
Explanation in more detail
In many cases, Kafka topics may contain sensitive information, such as personally identifiable data, financial details, or other confidential content, that should not be exposed to all users. With this feature, Topic and Schema Owners can configure data masking rules for specific fields within a schema. When users perform a Topic Browse search, any field marked for masking will be hidden, ensuring that users only have access to non-sensitive data. This feature enhances privacy and security while maintaining Kafka's flexibility and usability for broader data management and analysis purposes.
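To make the idea concrete, here is a minimal, illustrative sketch of field-level masking, not Axual’s actual implementation: the field names, the mask string, and the `mask_record` helper are all assumptions for the example.

```python
# Illustrative sketch (not Axual's implementation) of masking configured
# schema fields before a record is shown in a Topic Browse view.
MASK = "*****"

def mask_record(record: dict, masked_fields: set) -> dict:
    """Return a copy of the record with the configured fields masked."""
    return {
        field: MASK if field in masked_fields else value
        for field, value in record.items()
    }

# Suppose a Topic/Schema Owner marked 'iban' and 'email' as masked:
record = {"name": "Alice", "iban": "NL91ABNA0417164300", "email": "a@example.com"}
print(mask_record(record, {"iban", "email"}))
# {'name': 'Alice', 'iban': '*****', 'email': '*****'}
```

Everything except the masked fields stays browsable, which is exactly the trade-off the feature aims for: protecting sensitive values without hiding the rest of the record.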
Here you can find the public docs for the Data Masking operation.
Better Kafka Streams Experience through Self-Service
This feature enhances the Kafka Streams experience by allowing the Kafka Streams application to automatically create intermediate topics through Self-Service, eliminating the need for the Application Owner to create those topics manually.
What is its purpose?
The purpose of this feature is to simplify and streamline the process for Application Owners working with Kafka Streams. It reduces the time and effort spent manually creating intermediate topics by letting the application handle this task automatically, making it easier and faster to deploy and manage Kafka Streams applications.
Who is this useful for?
This feature is handy for Application Owners who work with Kafka Streams and must manage intermediate topics for their data processing pipelines. It is particularly beneficial for users who want to experiment or prototype with Kafka Streams without needing to pre-create multiple intermediate topics.
Explanation in more detail
Kafka Streams applications often require intermediate topics to process data through different stages of a stream processing pipeline. Traditionally, Application Owners had to manually create these intermediate topics, which added complexity and extra steps to the setup process. With this new feature, Kafka Streams applications are able to automatically create the necessary intermediate topics during their execution. This eliminates the need for manual topic creation and reduces the overhead for Application Owners. The feature is especially useful for experimentation and quick iterations, as users no longer need to pre-configure topics before running their applications. It streamlines the workflow, enhances productivity, and accelerates the deployment of Kafka Streams applications, making it easier for Application Owners to focus on data processing without worrying about the infrastructure setup.
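For context on what gets created automatically: Kafka Streams derives the names of its internal topics from the application id, using the patterns `<application.id>-<store-name>-changelog` for state-store changelogs and `<application.id>-<operator-name>-repartition` for repartition topics. The sketch below just reproduces that naming convention; the application id and store/operator names are example values.

```python
# Sketch of the naming convention Kafka Streams uses for the internal
# topics it creates automatically (example app id and names).
def changelog_topic(app_id: str, store_name: str) -> str:
    # State-store changelogs: <application.id>-<store-name>-changelog
    return f"{app_id}-{store_name}-changelog"

def repartition_topic(app_id: str, operator_name: str) -> str:
    # Repartition topics: <application.id>-<operator-name>-repartition
    return f"{app_id}-{operator_name}-repartition"

print(changelog_topic("orders-app", "order-counts"))
# orders-app-order-counts-changelog
print(repartition_topic("orders-app", "group-by-customer"))
# orders-app-group-by-customer-repartition
```

With this release, these are the kinds of topics an Application Owner no longer has to pre-create in Self-Service before running a Streams application.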
Here you can find the public docs for the Streaming Application.
Reset Consumer Application Offset
The "Reset Consumer Application Offset" feature allows Application Owners to reset the Consumer Group Offset for their Kafka application to either the LATEST or EARLIEST position. This enables a consumer to either skip to the latest available message in a Kafka topic or rewind to the earliest available message.
What is its purpose?
The purpose of this feature is to give Application Owners control over where a consumer should begin reading within an existing Kafka topic. This is particularly important when setting up new applications that need to consume from topics with large message histories but only need the most recent messages (or, conversely, need to process all available data from the beginning).
Who is this useful for?
This feature is useful for Application Owners and developers working with Kafka consumers who need to interact with topics containing large amounts of historical data. It is especially beneficial when new applications need to consume a Kafka topic with millions of messages, but only the latest messages are relevant for processing.
Explanation in more detail
When consuming messages from a Kafka topic, Kafka maintains the offset for each consumer group, determining the point at which consumption resumes after a consumer restarts or recovers. In scenarios where an application is newly created to consume from a Kafka topic with millions of messages, it might not be necessary to process all historical messages, especially if only the most recent data is relevant.
This feature allows Application Owners to reset the Consumer Group Offset to the LATEST position, so the consumer immediately starts consuming new messages, bypassing older ones. Alternatively, if there’s a need to reprocess or analyze historical data, they can reset the offset to EARLIEST to start consuming from the beginning of the topic.
This capability is crucial for optimizing performance and ensuring that only the necessary data is consumed, saving resources and reducing unnecessary processing overhead. It also ensures flexibility in how consumers interact with topics based on changing application requirements or use cases.
General Improvements
In our release notes, you will find other, smaller updates to our product, which we are continuously improving based on your feedback.
Begin your Kafka journey with Axual
Inspired by what you've read? Facing challenges with Kafka in an enterprise environment? We're here to assist. Here are your next steps:
Download the Use Case
Download for free; no credentials are needed.
Answers to your questions about Axual’s All-in-one Kafka Platform
Are you curious about our All-in-one Kafka platform? Dive into our FAQs for all the details you need, and find the answers to your burning questions.
The Data Masking feature ensures that sensitive information stored in Kafka topics is not exposed to users browsing the topic. Schema and Topic Owners can define which fields should be masked, improving data privacy and security while maintaining usability for non-sensitive data interactions.
The improved Kafka Streams feature allows applications to automatically create intermediate topics, reducing manual steps previously required by Application Owners. This simplifies the setup process, accelerates experimentation, and streamlines the deployment of Kafka Streams applications.
This feature enables Application Owners to reset their Consumer Group Offset to either LATEST or EARLIEST. It allows consumers to skip to the newest available messages or rewind to the beginning of a Kafka topic, offering flexibility for processing relevant data while optimizing performance.