How To Install Apache Kafka on Fedora 41

In this tutorial, we will show you how to install Apache Kafka on Fedora 41. Apache Kafka is a powerful open-source distributed event streaming platform widely used for building real-time data pipelines and streaming applications. It is designed for high throughput and fault tolerance, making it an essential tool for modern data architectures. This guide walks you through installing Apache Kafka on Fedora 41, leaving you with a solid setup for your event-driven applications.

Prerequisites

Before diving into the installation process, ensure that your system meets the following prerequisites:

  • Operating System: Fedora 41 installed and updated.
  • Hardware Requirements: Minimum requirements include a multi-core CPU, at least 4 GB of RAM, and sufficient storage (10 GB recommended).
  • User Permissions: Access to a non-root user with sudo privileges.
  • Internet Access: Required for downloading necessary packages.
  • Terminal Emulator: Installed for command-line operations.

Step 1: Update the Fedora System

Keeping your system updated is crucial for security and compatibility. Start by updating your Fedora system to ensure all packages are current. Open your terminal and execute the following commands:

sudo dnf clean all
sudo dnf update

This process will refresh your package manager’s cache and install any available updates.

Step 2: Install Java Runtime Environment (JRE)

Apache Kafka is built on Java, so you need to install the Java Runtime Environment (JRE) before proceeding. The recommended version is OpenJDK 17. To install it, run the following command in your terminal:

sudo dnf install java-17-openjdk.x86_64

After installation, verify that Java is correctly installed by checking its version:

java --version

You should see output indicating that OpenJDK 17 is installed. If you need to install an alternative version of Java, ensure compatibility with Kafka by referring to the official documentation.
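If more than one Java version ends up installed, Fedora's alternatives system lets you choose which one the `java` command points to. A quick example (the menu entries shown depend on what is installed on your machine):

sudo alternatives --config java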

Step 3: Download Apache Kafka

The next step is to download the latest version of Apache Kafka. Visit the official Apache Kafka download page to find the most recent release. Alternatively, you can use `wget` to download it directly from your terminal. Replace `<VERSION>` and `<SCALA_VERSION>` with the appropriate values:

wget https://downloads.apache.org/kafka/<VERSION>/kafka_<SCALA_VERSION>-<VERSION>.tgz

Once downloaded, verify the integrity of the file against its SHA512 checksum to ensure it has not been tampered with. The checksum values are published on the download page.
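As a sketch, assuming the archive is in your current directory, compute its digest locally and compare the output by eye with the value published on the download page (Apache's `.sha512` files are not always in the format `sha512sum -c` expects):

sha512sum kafka_<SCALA_VERSION>-<VERSION>.tgz

The printed hash must match the published one exactly; if it does not, delete the file and download it again.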

Step 4: Extract and Move Kafka Files

After downloading Kafka, extract the tarball using the following command:

tar -xzf kafka_<SCALA_VERSION>-<VERSION>.tgz

This will create a new directory containing all Kafka files. For better organization, move this directory to `/opt`:

sudo mv kafka_<SCALA_VERSION>-<VERSION> /opt/kafka

You should also change ownership of this directory to your current user for easier access:

sudo chown -R $USER:$USER /opt/kafka
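To confirm the move worked, list the directory; you should see Kafka's `bin`, `config`, and `libs` folders, among others:

ls /opt/kafka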

Step 5: Configure Environment Variables

Setting up environment variables makes it easier to run Kafka commands from any terminal session. Note that `/etc/environment` does not expand variables such as `$PATH`, so a shell startup file is the better place for this. Edit `~/.bashrc` as follows:

nano ~/.bashrc

Add these lines at the end of the file:

export KAFKA_HOME=/opt/kafka
export PATH=$PATH:$KAFKA_HOME/bin

This configuration allows you to run Kafka commands without typing their full paths. After saving your changes, reload the file with:

source ~/.bashrc

You can verify that your configuration is correct by checking if `$KAFKA_HOME` is set properly:

echo $KAFKA_HOME

Step 6: Set Up Zookeeper

Zookeeper acts as a coordination service for managing distributed applications like Kafka. Before starting Kafka, you need to start Zookeeper. The following command runs Zookeeper in the foreground, so open a dedicated terminal for it:

$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties

This command will initiate Zookeeper using its default configuration file. You should see logs indicating that Zookeeper is running properly.
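Alternatively, the startup script accepts a `-daemon` flag that runs Zookeeper in the background. A brief sketch, which also checks that Zookeeper is listening on its default port (2181):

$KAFKA_HOME/bin/zookeeper-server-start.sh -daemon $KAFKA_HOME/config/zookeeper.properties
ss -tln | grep 2181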

Step 7: Configure and Start Kafka Server

Edit Server Properties

The next step involves configuring Kafka’s server properties. Open the `server.properties` file for editing:

nano /opt/kafka/config/server.properties

You will need to modify several key configurations in this file:

  • Broker ID:
    broker.id=0
  • Log Directories:
    log.dirs=/tmp/kafka-logs
  • Zookeeper Connection String:
    zookeeper.connect=localhost:2181

This configuration sets up a single broker instance with a log directory where message data will be stored. Keep in mind that `/tmp/kafka-logs` is cleared on reboot, so point `log.dirs` at a persistent path for anything beyond testing.
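To double-check the values you just set, you can filter the configuration file for those keys; a quick sketch:

grep -E '^(broker\.id|log\.dirs|zookeeper\.connect)' /opt/kafka/config/server.properties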

Start Kafka Server

You can now start the Kafka server in standalone mode using this command:

$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties &

The ampersand (`&`) at the end runs it in the background so you can continue using your terminal.

Verify Installation

To confirm that Kafka is running correctly, check the active processes or inspect Kafka's application logs, which are printed to the terminal (and, with the default log4j configuration, written under `/opt/kafka/logs/`). The message data itself is stored in the `/tmp/kafka-logs` directory configured earlier. If everything is set up correctly, you should see a log entry indicating that the Kafka server has started.
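As a quick sketch of that check (the log path assumes the default log4j configuration):

ps aux | grep -i '[k]afka'
tail -n 20 /opt/kafka/logs/server.log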

Step 8: Create Systemd Service Files

If you want Kafka and Zookeeper to start automatically on boot, it’s best practice to create systemd service files for both services.

Zookeeper Service Configuration

  1. Create a new systemd service file for Zookeeper:
    sudo nano /etc/systemd/system/zookeeper.service
  2. Add the following content:
    [Unit]
    Description=Apache Zookeeper Server
    After=network.target
    
    [Service]
    ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties
    
    [Install]
    WantedBy=multi-user.target

Kafka Service Configuration

  1. Create a new systemd service file for Kafka:
    sudo nano /etc/systemd/system/kafka.service
  2. Add this content:
    [Unit]
    Description=Apache Kafka Server
    Requires=zookeeper.service
    After=zookeeper.service
    
    [Service]
    ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
    
    [Install]
    WantedBy=multi-user.target
  3. Reload the systemd daemon and enable both services to start at boot (you can then start them immediately, as shown below):
    sudo systemctl daemon-reload
    sudo systemctl enable zookeeper.service kafka.service
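To launch both services right away and confirm that Kafka came up cleanly, use systemd's standard commands:

sudo systemctl start zookeeper.service kafka.service
sudo systemctl status kafka.service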

Step 9: Test Apache Kafka Installation

Create a Topic

The next step is to create a test topic named `test-topic`. This topic will be used for sending and receiving messages. Use this command to create it:

$KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test-topic
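To confirm the topic was created, list all topics registered with the broker:

$KAFKA_HOME/bin/kafka-topics.sh --list --bootstrap-server localhost:9092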

Produce Messages

You can now start producing messages to your topic using the console producer tool provided by Kafka. Recent Kafka releases use the `--bootstrap-server` flag here (the older `--broker-list` flag is deprecated). Run this command in another terminal window:

$KAFKA_HOME/bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic

You can type messages directly into this console; press Enter after each message to send it.
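For scripted tests, you can also pipe input into the producer instead of typing it interactively; a minimal sketch:

echo "hello from Fedora 41" | $KAFKA_HOME/bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic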

Consume Messages

The final step in testing your installation is consuming messages from `test-topic`. Open another terminal window and run this command:

$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --from-beginning

This command will display any messages sent to `test-topic`, allowing you to verify that everything is functioning correctly.

Congratulations! You have successfully installed Apache Kafka. Thanks for using this tutorial to install Apache Kafka on your Fedora 41 system. For additional information, we recommend checking the official Apache Kafka website.
