How to use Strimzi Kafka: Opening a Kubernetes shell on a broker pod and listing all topics
Strimzi offers an efficient way to deploy and manage Apache Kafka on Kubernetes. In this article, we'll walk you through opening a shell on a Kafka broker pod in Kubernetes and listing all the topics in your Kafka cluster over an SSL-based connection.
Running Apache Kafka on Kubernetes is streamlined with https://strimzi.io/, which simplifies deploying and managing Kafka clusters within Kubernetes.
Prerequisites for Strimzi Kafka
Before we dive in, ensure you have the following:
- Kubernetes Cluster: A running Kubernetes cluster (v1.16 or later).
- Existing Strimzi Kafka Cluster: A Kafka cluster deployed using Strimzi in your Kubernetes cluster.
- Kubectl Installed: The Kubernetes command-line tool, kubectl, installed and configured to interact with your cluster.
- Access Rights: Sufficient permissions to interact with Kubernetes resources.
- Kafka Client Authentication Details: SSL certificates or credentials required to connect securely to your Kafka cluster. These may have been generated automatically by the Strimzi Cluster Operator.
Step 1: Identify the Kafka Broker Pod
List the pods in the namespace where your Kafka cluster is deployed (we'll assume the namespace is kafka):
kubectl get pods -n kafka
You should see output similar to:
NAME                       READY   STATUS    RESTARTS   AGE
my-kafka-cluster-kafka-0   1/1     Running   0          2m
my-kafka-cluster-kafka-1   1/1     Running   0          2m
my-kafka-cluster-kafka-2   1/1     Running   0          2m
The broker pods are those with names like my-kafka-cluster-kafka-0, my-kafka-cluster-kafka-1, etc.
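If the namespace also contains operator or entity-operator pods, you can isolate the brokers with a label selector or a simple name filter. The label shown is the one Strimzi typically applies; the runnable part below just illustrates the name pattern against a made-up pod list:

```shell
# Brokers can usually be selected by label (Strimzi typically labels them
# with strimzi.io/name=<cluster-name>-kafka); this requires a live cluster:
#   kubectl get pods -n kafka -l strimzi.io/name=my-kafka-cluster-kafka
# Offline illustration: pick broker pods out of a pod list by name pattern.
printf 'my-kafka-cluster-kafka-0\nmy-kafka-cluster-entity-operator-6b8\nmy-kafka-cluster-kafka-1\n' \
  | grep -E 'kafka-[0-9]+$'
# prints:
# my-kafka-cluster-kafka-0
# my-kafka-cluster-kafka-1
```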
Step 2: Open a Shell on a Broker Pod
Choose one of the Kafka broker pods to access. For this example, we'll use my-kafka-cluster-kafka-0.
Execute the following command to open an interactive shell inside the broker pod:
kubectl exec -it my-kafka-cluster-kafka-0 -n kafka -c kafka -- /bin/bash
You are now inside the Kafka broker pod's shell.
Note:
- The -c kafka option selects the kafka container explicitly, so kubectl doesn't attach to another container in the pod (such as an init container).
- A tool like k9s can make your life a lot easier by wrapping these commands in a single terminal UI. Go to the pods screen, use the vim-like / command to filter for kafka- pods, and press s to open a shell on the highlighted pod.
Step 3: Prepare for SSL-Based Connection
Since we'll connect using an SSL-based protocol, ensure you have access to the necessary SSL certificates within the pod. In Strimzi deployments, certificates are typically stored in specific directories.
List the contents of the Kafka configuration directory to find the keystore and truststore files:
ls /opt/kafka/config
You should see files like:
client.keystore.p12
client.truststore.p12
server.keystore.p12
server.truststore.p12
We will use these keystore and truststore files to establish an SSL connection.
Extracting Keystore and Truststore Passwords
Strimzi stores the keystore and truststore passwords in a local property file, which is generated at startup from configured values and environment variables. Retrieve them using:
grep "9093.ssl.*pass" /tmp/strimzi.properties
Make note of these passwords as they'll be needed in the configuration file later.
Note: 9093.ssl.*pass is a regular expression matching the settings for port 9093, the default port on which the Kafka broker serves SSL-secured connections.
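To see what that pattern matches, here it is run against a small made-up properties file; the listener prefix, key names, and passwords are invented for this illustration, and the real ones on your broker will differ:

```shell
# Illustrative only: these key names and passwords are invented.
cat > /tmp/strimzi-pass-sample.properties <<'EOF'
listener.name.tls-9093.ssl.keystore.password=example-keystore-pass
listener.name.tls-9093.ssl.truststore.password=example-truststore-pass
listener.name.tls-9093.ssl.keystore.type=PKCS12
EOF
# Only the port-9093 password entries match the pattern:
grep "9093.ssl.*pass" /tmp/strimzi-pass-sample.properties
# prints:
# listener.name.tls-9093.ssl.keystore.password=example-keystore-pass
# listener.name.tls-9093.ssl.truststore.password=example-truststore-pass
```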
Step 4: List All Kafka Topics Using SSL Connection
Use the Kafka command-line tools to interact with the cluster over SSL. To list all topics securely, follow these steps:
Create a Client Configuration File
First, create a configuration file named /tmp/blogexample.properties with the following content:
security.protocol=SSL
ssl.truststore.location=/opt/kafka/config/client.truststore.p12
ssl.truststore.password=<truststore-password>
ssl.truststore.type=PKCS12
ssl.keystore.location=/opt/kafka/config/client.keystore.p12
ssl.keystore.password=<keystore-password>
ssl.keystore.type=PKCS12
Replace <truststore-password> and <keystore-password> with the passwords you extracted earlier.
Note:
A quicker way to achieve the same result, using a single compound command, is:
{
grep "9093.ssl" /tmp/strimzi.properties | sed "s/listener.name..*-9093.//"
echo "security.protocol=SSL"
echo "ssl.endpoint.identification.algorithm="
} > /tmp/blogexample.properties
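To illustrate what that compound command produces, here is the same pipeline run against a small made-up stand-in for /tmp/strimzi.properties; the real listener prefix, paths, and passwords on your broker will differ:

```shell
# Sample input standing in for /tmp/strimzi.properties (values invented):
cat > /tmp/strimzi-sample.properties <<'EOF'
listener.name.tls-9093.ssl.keystore.location=/opt/kafka/config/client.keystore.p12
listener.name.tls-9093.ssl.keystore.password=example-pass
listener.name.tls-9093.ssl.keystore.type=PKCS12
EOF
{
  # The sed strips the listener prefix, leaving plain client-side ssl.* keys:
  grep "9093.ssl" /tmp/strimzi-sample.properties | sed "s/listener.name..*-9093.//"
  echo "security.protocol=SSL"
  echo "ssl.endpoint.identification.algorithm="
} > /tmp/blogexample-sample.properties
cat /tmp/blogexample-sample.properties
# prints:
# ssl.keystore.location=/opt/kafka/config/client.keystore.p12
# ssl.keystore.password=example-pass
# ssl.keystore.type=PKCS12
# security.protocol=SSL
# ssl.endpoint.identification.algorithm=
```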
List All Topics
Now, run the following command to list all Kafka topics:
kafka-topics.sh --bootstrap-server localhost:9093 \
--list \
--command-config /tmp/blogexample.properties
Note:
- 9093 is the SSL port for the Kafka broker.
- The --command-config option specifies the client configuration file, which in this case sets options for the use of SSL.
Sample Output
__consumer_offsets
my-topic
Another-topic
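Because the output is plain text, it can be piped through standard tools, for example to hide Kafka's internal topics (those whose names start with __). The pipeline is simulated below using the sample output shown above:

```shell
# On a live broker you would pipe kafka-topics.sh into grep:
#   kafka-topics.sh --bootstrap-server localhost:9093 --list \
#     --command-config /tmp/blogexample.properties | grep -v '^__'
# Simulated here with the sample topic list from above:
printf '__consumer_offsets\nmy-topic\nAnother-topic\n' | grep -v '^__'
# prints:
# my-topic
# Another-topic
```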
Step 5: Describe a Single Kafka Topic
To get detailed information about a specific topic, use the following command:
kafka-topics.sh --bootstrap-server localhost:9093 \
--describe \
--topic <your-topic-name> \
--command-config /tmp/blogexample.properties
Replace <your-topic-name> with the name of the topic you want to describe.
Sample Output
Topic: my-topic  PartitionCount: 3  ReplicationFactor: 2  Configs: segment.bytes=1073741824
    Topic: my-topic  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
    Topic: my-topic  Partition: 1  Leader: 2  Replicas: 2,0  Isr: 2,0
    Topic: my-topic  Partition: 2  Leader: 0  Replicas: 0,1  Isr: 0,1
This output provides partition details, the leader broker, replicas, and in-sync replicas for the specified topic.
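Since the --describe output is line-oriented, it is also easy to post-process. For instance, partitions per topic can be counted with awk; the snippet below runs on the sample output from above rather than a live cluster:

```shell
# Count partitions per topic from kafka-topics.sh --describe output
# (simulated input; on a live broker, pipe the real command into awk):
printf '%s\n' \
  'Topic: my-topic Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2' \
  'Topic: my-topic Partition: 1 Leader: 2 Replicas: 2,0 Isr: 2,0' \
  'Topic: my-topic Partition: 2 Leader: 0 Replicas: 0,1 Isr: 0,1' \
| awk '/ Partition: / {count[$2]++} END {for (t in count) print t, count[t]}'
# prints: my-topic 3
```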
Step 6: Exit the Broker Pod Shell
Once you've listed the topics and described a specific topic, exit the shell:
exit
or press Ctrl+D. You're now back in your local terminal.
Conclusion
By following these steps, you've successfully accessed a Kafka broker pod in a Kubernetes cluster managed by Strimzi and listed all the topics in your Kafka cluster using an SSL-based connection. You've also learned how to describe the details of a single topic. Secure connections are essential in production environments to protect data in transit and ensure that only authorized clients interact with your Kafka cluster.
Strimzi simplifies running Apache Kafka on Kubernetes, providing robust security features like SSL out of the box. Leveraging these features ensures your messaging system is scalable, manageable, and secure. Here you can take a deep dive into why Axual uses Strimzi Kafka.
References
- Apache Kafka Security Documentation
- K9s CLI