Apache Kafka

How to Run Apache Kafka Without ZooKeeper

Apache ZooKeeper, commonly known simply as ZooKeeper, is a distributed coordination service that Kafka servers use to manage and coordinate configuration, synchronization, and communication between Kafka brokers, producers, and consumers.

It provides a reliable and fault-tolerant interface for the Kafka server to store and manage its metadata. This includes information such as the available brokers, the partitions of each topic, the location of replicas, and more.

The role of ZooKeeper is to keep this information consistent and up to date on all Kafka nodes, even in the event of a failure.

How Does ZooKeeper Work?

ZooKeeper stores its data in a hierarchical namespace called the znode tree, where each node can hold data and be monitored for changes.

Once a ZooKeeper server receives a write request, it forwards it to a quorum of servers, which must agree on the write operation before it is committed. This agreement ensures that the update is atomic and consistent across all nodes.

Kafka uses ZooKeeper for several purposes such as:

Electing a leader for each partition: ZooKeeper helps Kafka maintain the list of available brokers and their status. When a broker fails or joins the cluster, ZooKeeper notifies Kafka, which then reassigns the affected partitions to new leaders.

Managing group coordination for consumers: Kafka uses ZooKeeper to track the state and ownership of consumer groups, which allows multiple consumers to work together to consume a topic. ZooKeeper tracks which partitions each consumer owns and notifies the group of any changes.

Storing and managing offsets: Older Kafka versions used ZooKeeper to store the offsets of each consumer group, which represent the consumer's position within each partition (newer versions store offsets in an internal Kafka topic instead). Either way, this allows consumers to resume from where they left off after a failure or a restart.
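On a ZooKeeper-based cluster, you can see this metadata directly with the zkCli.sh shell that ships with ZooKeeper. A minimal sketch, assuming ZooKeeper is listening on the default localhost:2181 and zkCli.sh is on the PATH (the `|| true` keeps the loop harmless if no ZooKeeper is running):

```shell
# Walk a few of the znodes Kafka keeps in ZooKeeper: /brokers/ids holds
# one child per live broker, /controller names the current controller,
# and /config/topics stores per-topic configuration overrides.
for znode in /brokers/ids /controller /config/topics; do
    echo "=== $znode ==="
    zkCli.sh -server localhost:2181 ls "$znode" 2>/dev/null || true
done
```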

However, despite its historically critical role, Kafka can now run entirely without ZooKeeper by using KRaft mode, in which the brokers themselves manage cluster metadata through a built-in Raft-based controller quorum. This removes a whole separate service to install, secure, and operate.

Let us discuss how we can run Kafka without ZooKeeper.

Requirements:

  1. Access to Terminal
  2. Sudo permissions
  3. JDK 11 or newer

Step 1: Install Apache Kafka

Download the latest version of Kafka from its official downloads page: https://kafka.apache.org/downloads

Once the download is complete, extract the contents to a directory of your choice. For example:

/opt/kafka

Navigate into the extracted Kafka directory:

cd /opt/kafka
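The download and extraction can also be scripted. A sketch, using a hypothetical release version (substitute the current one from the downloads page) and assuming sudo access to create /opt/kafka:

```shell
# Hypothetical release coordinates -- check https://kafka.apache.org/downloads
KAFKA_VERSION="3.7.0"
SCALA_VERSION="2.13"
TARBALL="kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz"
URL="https://downloads.apache.org/kafka/${KAFKA_VERSION}/${TARBALL}"

# Fetch and unpack into /opt/kafka, stripping the versioned top-level folder.
if [ ! -d /opt/kafka ]; then
    wget -q "$URL" &&
    sudo mkdir -p /opt/kafka &&
    sudo tar -xzf "$TARBALL" -C /opt/kafka --strip-components=1
fi
echo "Kafka expected under /opt/kafka (from $TARBALL)"
```

Note that Apache keeps only current releases on downloads.apache.org; older versions move to archive.apache.org.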

Step 2: Generate a New Cluster ID

Start by generating a new ID for your Kafka cluster with the following command:

./bin/kafka-storage.sh random-uuid

The command should return a new UUID value such as:

CJUOWCXZSi2L5k-aQKHn6A
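It is convenient to capture the ID in a shell variable so the next step can reuse it verbatim. A sketch, assuming your current directory is the Kafka root (e.g. /opt/kafka):

```shell
# Store the generated cluster ID for the format step below.
KAFKA_CLUSTER_ID="$(./bin/kafka-storage.sh random-uuid 2>/dev/null)"
echo "Cluster ID: $KAFKA_CLUSTER_ID"
```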

Step 3: Format the Storage Directory

The next step is to format the storage directory with the UUID value from the previous step. Run the following command:

./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/server.properties
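The ./config/kraft/server.properties file passed with -c is what makes this a KRaft (ZooKeeper-less) setup. The stock file shipped with recent Kafka releases contains, among others, roughly these settings for a single node acting as both broker and controller:

```properties
# One process plays both roles in this single-node setup.
process.roles=broker,controller
node.id=1
# The Raft quorum that replaces ZooKeeper: node 1 votes on port 9093.
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
# Where the broker and metadata logs live.
log.dirs=/tmp/kraft-combined-logs
```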

An example command is as follows:

./bin/kafka-storage.sh format -t CJUOWCXZSi2L5k-aQKHn6A -c ./config/kraft/server.properties

Output:

Formatting /tmp/kraft-combined-logs with metadata.version 3.4-IV0.
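You can confirm that the format step worked by looking for the meta.properties file it writes into each log directory. A quick check, assuming the default log.dirs of /tmp/kraft-combined-logs:

```shell
# meta.properties records the cluster ID and node ID for this log directory.
META=/tmp/kraft-combined-logs/meta.properties
cat "$META" 2>/dev/null || echo "no meta.properties yet -- run the format command first"
```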

Step 4: Launch the Broker

Once completed, you can start the broker in the foreground with the following command (add the -daemon flag to run it in the background instead):

./bin/kafka-server-start.sh ./config/kraft/server.properties

The command should start the Kafka broker without any need for ZooKeeper.
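A sketch of a full start/verify/stop cycle, assuming the Kafka root directory and the default localhost:9092 listener (-daemon backgrounds the broker so the terminal stays free):

```shell
CONFIG=./config/kraft/server.properties
if [ -x ./bin/kafka-server-start.sh ]; then
    # Start the broker in the background.
    ./bin/kafka-server-start.sh -daemon "$CONFIG"
    sleep 5  # give the broker a moment to finish starting

    # Sanity check: list topics straight from the broker -- no ZooKeeper involved.
    ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list

    # Shut the daemonized broker down gracefully.
    ./bin/kafka-server-stop.sh
else
    echo "run this from the Kafka root directory (e.g. /opt/kafka)"
fi
```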

To stop the broker, press CTRL + C in its terminal, or run ./bin/kafka-server-stop.sh if it is running as a daemon.

Conclusion

You have now learned how to use Kafka's KRaft mode to configure and run Apache Kafka without ZooKeeper. Note that this is not a shortcut applied to a ZooKeeper-based cluster: in KRaft mode, the metadata that ZooKeeper used to hold is managed by Kafka's own Raft-based controller quorum.

A cluster must therefore be set up for one metadata mode or the other. Pointing KRaft-formatted brokers at a ZooKeeper-based cluster, or mixing the two modes, can lead to inconsistent metadata and even data loss.

About the author

John Otieno

My name is John and I am a fellow geek like you. I am passionate about all things computers, from hardware and operating systems to programming. My dream is to share my knowledge with the world and help out fellow geeks. Follow my content by subscribing to the LinuxHint mailing list.