How To Install Kubernetes on Fedora 43

Container orchestration has become essential for modern application deployment. Kubernetes stands at the forefront of this technology, enabling automated scaling, management, and deployment of containerized workloads. For Linux professionals and developers interested in building robust cluster infrastructure, understanding how to install Kubernetes on Fedora 43 represents a critical skill that opens doors to advanced DevOps practices and cloud-native development. This comprehensive guide walks through every step of the installation process, from system preparation through cluster validation and troubleshooting.
Prerequisites and System Requirements for Kubernetes Installation
Before embarking on your Kubernetes installation journey, it’s crucial to verify that your system meets specific hardware and software requirements. Kubernetes clusters demand consistent system specifications across all nodes to ensure reliable performance and proper cluster communication.
Hardware Specifications
Your Fedora 43 system must have adequate computational resources. The minimum recommended configuration is 2 gigabytes of RAM and at least 2 CPU cores per node. For production environments and larger workloads, allocate 4 gigabytes of RAM alongside 20 to 40 gigabytes of free disk space to comfortably accommodate container images and persistent volumes. Network connectivity between all nodes represents a critical requirement—whether you’re building a single-node learning environment or a multi-node production cluster, ensure reliable network paths between machines.
Software and System Dependencies
You’ll need a fresh or recently updated Fedora 43 installation, either the Server or Workstation edition. Root access or sudo privileges are absolutely required for system-level configuration. Reliable internet connectivity is essential during the installation process, as package managers must download container images and Kubernetes components from remote repositories. Each node in your cluster requires a unique hostname and MAC address to ensure proper identification and communication within the cluster infrastructure.
Creating Your Pre-Installation Checklist
Validate that your hardware meets minimum specifications before proceeding. Check that your container runtime (which we’ll install later) is compatible with Fedora 43. Decide whether you’re building a single-node cluster for learning purposes or planning a multi-node production deployment, as this decision influences several configuration choices throughout the installation process.
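The hardware items on this checklist can be verified with a short shell snippet before you continue—a sketch, assuming a standard Linux host with GNU coreutils and /proc mounted:

```shell
# Pre-flight sketch: compare this host against the minimum specs above.
cpus=$(nproc)                                                     # CPU core count
mem_mb=$(awk '/MemTotal/ {printf "%d", $2/1024}' /proc/meminfo)   # total RAM in MiB
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')       # free space on / in GiB

[ "$cpus" -ge 2 ]      && echo "CPU:  OK ($cpus cores)"         || echo "CPU:  below minimum ($cpus cores)"
[ "$mem_mb" -ge 2048 ] && echo "RAM:  OK (${mem_mb} MiB)"       || echo "RAM:  below minimum (${mem_mb} MiB)"
[ "$disk_gb" -ge 20 ]  && echo "Disk: OK (${disk_gb} GiB free)" || echo "Disk: below recommendation (${disk_gb} GiB free)"
```

Run the same snippet on every machine you plan to add to the cluster, since Kubernetes expects consistent specifications across nodes.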
Understanding Container Runtime Options for Fedora 43
Container runtimes serve as the foundational layer enabling Kubernetes to run and manage containers. Understanding your runtime options helps ensure optimal performance and compatibility with your specific use case.
CRI-O: The Native Kubernetes Container Runtime
CRI-O represents the Container Runtime Interface implementation developed specifically for Kubernetes environments. It’s lightweight, maintains CRI compliance, and follows the OCI (Open Container Initiative) standards. CRI-O integrates seamlessly with Fedora’s native package repositories, making installation straightforward and dependency management simpler. The architecture of CRI-O is designed from the ground up to work efficiently with Kubernetes, eliminating unnecessary features not needed for container orchestration. This architectural alignment translates to better performance, lower resource consumption, and fewer compatibility issues when managing Kubernetes clusters.
Containerd as an Alternative Runtime
Containerd offers another viable option for Kubernetes deployments on Fedora 43. Developed originally for Docker and now maintained as an independent project, containerd provides excellent stability and widespread adoption across the cloud-native ecosystem. It functions as a complete container runtime exposing gRPC APIs and managing the full container lifecycle. While both CRI-O and containerd work effectively with Kubernetes, CRI-O is specifically optimized for Kubernetes workloads and comes with native Fedora package support.
Why Select CRI-O for Fedora 43 Kubernetes
CRI-O integrates directly with Fedora’s package management system, allowing seamless updates and consistency with Fedora’s philosophy of providing cutting-edge open-source software. The Kubernetes-first design philosophy means each CRI-O version aligns with corresponding Kubernetes versions, simplifying version matching and ensuring compatibility. Security receives particular attention in CRI-O’s design, and the runtime benefits from Kubernetes community testing and hardening.
Preparing Your Fedora 43 System for Kubernetes
Successful Kubernetes installation begins with proper system preparation. Each step in this section creates essential prerequisites for cluster stability and functionality.
Update All System Packages
Begin by updating your entire Fedora 43 system to ensure all packages include the latest security patches and bug fixes. Execute the following command to perform a complete system update:
sudo dnf update
This command updates DNF’s package cache and upgrades all installed packages to their newest available versions. After the update completes, you may need to reboot your system to ensure kernel updates take effect properly. The reboot step can technically be deferred until after the next configuration step, but it’s generally recommended to perform it now for maximum stability.
Disable Swap Memory Completely
Kubernetes requires swap memory to be disabled on all cluster nodes. Kubeadm will issue warnings during installation if swap is detected, and kubelet may refuse to start properly with swap enabled. Modern Fedora systems use zram by default, which functions as compressed RAM-based storage instead of traditional disk-based swap.
Execute these commands to stop and disable zram:
sudo swapoff -a
sudo dnf remove zram-generator-defaults
sudo reboot
The system will reboot and permanently disable swap functionality. After the reboot, verify that swap has been successfully disabled by running:
free -h
The swap row should display zeros in all columns, confirming successful deactivation. This requirement exists because Kubernetes relies on precise resource management, and swap memory can cause unpredictable performance degradation and scheduling conflicts.
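An equivalent scripted check reads SwapTotal directly from /proc/meminfo—a zero value means no swap device or zram volume remains active:

```shell
# Prints "swap disabled" only when SwapTotal is zero kilobytes.
swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)
if [ "$swap_kb" -eq 0 ]; then
    echo "swap disabled"
else
    echo "swap still active: ${swap_kb} kB"
fi
```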
Configure SELinux for Kubernetes
SELinux provides additional security by enforcing mandatory access controls. Most Kubernetes guides recommend disabling SELinux to simplify initial setup, though Kubernetes operates effectively with SELinux enabled when properly configured. For learning and development environments, permissive mode offers a good middle ground—policy violations are logged but not enforced, so nothing is blocked while you still retain an audit trail.
To place SELinux in permissive mode temporarily:
sudo setenforce 0
To make this change permanent across system reboots:
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
After this change, reboot to ensure the new SELinux configuration takes effect. Production environments should invest time in proper SELinux policy configuration to maintain enhanced security while running Kubernetes.
Configure Firewall Rules for Kubernetes Communication
Fedora 43 uses firewalld as its default firewall management service. While disabling the firewall simplifies initial setup for learning environments, production deployments should configure specific firewall rules. For development purposes, disable firewalld:
sudo systemctl disable --now firewalld
For production environments, consult Kubernetes networking documentation to identify and open specific ports required for control plane and worker node communication. The Kubernetes project maintains comprehensive documentation on required ports and protocols at https://kubernetes.io/docs/reference/networking/ports-and-protocols/.
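If you keep firewalld enabled instead, the port list from that documentation translates into firewall-cmd rules along these lines. This is a sketch for a control plane node; worker nodes need 10250/tcp and the NodePort range 30000-32767/tcp instead:

```shell
# Control plane ports, per the Kubernetes ports-and-protocols reference
sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client and peer
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
sudo firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
sudo firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
sudo firewall-cmd --reload                               # apply the permanent rules
```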
Install Required Networking Utilities
Modern Kubernetes packages include necessary networking utilities by default, but it’s good practice to explicitly install iptables and iproute-tc for complete networking functionality:
sudo dnf install iptables iproute-tc
These tools provide essential networking capabilities that container runtimes and network plugins rely upon for proper cluster communication.
Configure IPv4 Forwarding and Bridge Filters
Kubernetes uses Linux kernel networking features that require specific configuration. Enable kernel modules and configure sysctl parameters:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
Load these modules immediately:
sudo modprobe overlay
sudo modprobe br_netfilter
Configure required sysctl parameters:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
Apply these parameters without requiring a reboot:
sudo sysctl --system
Verify the configuration applied correctly:
lsmod | grep br_netfilter
lsmod | grep overlay
Both commands should return output confirming the modules are loaded. Verify sysctl settings:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
All three parameters should return values of 1, confirming proper kernel configuration.
Selecting and Installing the CRI-O Container Runtime
The container runtime must be installed and operational before Kubernetes components can be initialized. CRI-O handles all container lifecycle operations within your Kubernetes cluster.
Check Available CRI-O Versions
List available CRI-O versions compatible with Fedora 43:
sudo dnf list 'cri-o*'
This command displays available CRI-O packages. The version you select should match your target Kubernetes version—CRI-O version 1.31 works with Kubernetes 1.31, for example.
Install CRI-O and Container Networking Plugins
Install CRI-O with its matching version and container networking plugins:
sudo dnf install cri-o1.31 containernetworking-plugins
Replace “1.31” with the specific version matching your chosen Kubernetes version. The containernetworking-plugins package provides essential container network interface functionality.
Start and Enable CRI-O Service
Enable CRI-O to start automatically at system boot:
sudo systemctl enable crio
Start the CRI-O service immediately:
sudo systemctl start crio
Verify CRI-O is running properly:
sudo systemctl status crio
The status output should show “active (running)” in green text.
Discovering Available Kubernetes Versions
Kubernetes maintains multiple versions in Fedora’s repositories, each receiving different levels of community support. Understanding version availability helps ensure you select a well-supported release.
List Available Kubernetes Versions
Display available versioned Kubernetes packages:
sudo dnf list 'kubernetes1.??'
This command lists all available major.minor version combinations. For example, Fedora 43 might offer versions 1.29, 1.30, 1.31, and 1.32.
Understand Version Support and Lifecycle
Kubernetes follows a strict versioning policy. Each release receives security updates for approximately one year from its general availability date. The Kubernetes project publishes release history and end-of-life dates at https://kubernetes.io/releases/. Selecting versions within their active support window ensures you receive security patches and bug fixes.
Installing Kubernetes Components
Kubernetes consists of three essential command-line tools: kubeadm handles cluster initialization, kubelet manages container runtime on each node, and kubectl allows cluster administration and application management.
Understanding Each Kubernetes Component
kubeadm bootstraps the Kubernetes cluster by generating certificates, configuring the control plane, and initializing the cluster. kubelet runs on every node (both control plane and worker nodes) as a system daemon managing the container runtime and ensuring pods run according to specifications. kubectl serves as the command-line interface for cluster administration, allowing deployment creation, pod management, and cluster status monitoring.
Install Kubernetes with DNF
Install Kubernetes version 1.31 along with all necessary components:
sudo dnf install kubernetes1.31 kubernetes1.31-kubeadm kubernetes1.31-client
Replace “1.31” with your selected version number. This installs kubelet, kubeadm, and kubectl from versioned packages, ensuring all components align at the same version level. Version consistency prevents unexpected behavior and compatibility issues.
Enable and Start Kubelet Service
Enable kubelet to start automatically at system boot:
sudo systemctl enable kubelet
Start kubelet immediately:
sudo systemctl start kubelet
Note that kubelet will enter a crash loop at this point—this is expected behavior. Kubelet cannot function properly until the cluster is initialized by kubeadm in the next step, so don’t be alarmed by repeated restart attempts.
Pre-Pulling Kubernetes System Container Images
Kubernetes uses multiple system container images for cluster components. Pre-pulling these images accelerates subsequent steps and ensures image availability.
Pull System Container Images
Execute kubeadm’s image pull command:
sudo kubeadm config images pull
This command retrieves all required container images from the default registry. The command displays download progress for each image and verifies successful retrieval. System images typically include kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd, and coredns.
Verify Downloaded Images
List all pulled container images:
sudo crictl images
This command shows all container images available in your CRI-O runtime storage. Verify that all expected system images are present before proceeding to cluster initialization.
Initializing Your Kubernetes Cluster
Cluster initialization represents the critical step where kubeadm generates certificates, configures the control plane, and establishes the foundation for your entire Kubernetes deployment.
Understanding the Initialization Process
Kubeadm init performs numerous operations: generating cluster certificates and keys, configuring the Kubernetes API server, deploying the controller manager and scheduler, initializing etcd for cluster state storage, and configuring kubelet to function with the initialized control plane. The process generates a token that allows worker nodes to join the cluster securely.
Execute Cluster Initialization
Initialize the cluster with the following command:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
The --pod-network-cidr parameter specifies the IP address range used for pod-to-pod communication. The value 10.244.0.0/16 suits most learning and production environments. The initialization process takes several minutes. Upon successful completion, kubeadm displays initialization success and provides critical configuration instructions.
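The same initialization can also be expressed declaratively with a kubeadm configuration file and run as sudo kubeadm init --config kubeadm-config.yaml. This is a minimal sketch; the kubernetesVersion value is an assumption that should match the packages installed earlier:

```yaml
# kubeadm-config.yaml -- hypothetical example, equivalent to the command above
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: "v1.31.0"
networking:
  podSubnet: "10.244.0.0/16"   # same value as --pod-network-cidr
```

A config file becomes useful once you need to pin additional settings (certificate SANs, a different service subnet) and want them under version control.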
Save the Join Command for Worker Nodes
The initialization output includes a kubeadm join command essential for adding worker nodes. This command contains authentication tokens and discovery information. Copy and save this command in a secure location:
kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:1234567890abcdefghijklmnopqrstuvwxyz
Tokens expire after 24 hours. If you need to add worker nodes after token expiration, regenerate the token using:
sudo kubeadm token create --print-join-command
Configuring kubectl Access for Non-Root Users
The kubectl command-line tool must be configured to communicate with your Kubernetes cluster. By default, cluster credentials reside in /etc/kubernetes/admin.conf, accessible only to the root user.
Set Up kubectl Configuration
Create the .kube directory in your home folder:
mkdir -p $HOME/.kube
Copy the cluster configuration file:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
Set appropriate file ownership to your user account:
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Set restrictive file permissions for security:
chmod 600 $HOME/.kube/config
Verify kubectl Connectivity
Test kubectl connection to the cluster:
kubectl cluster-info
This command should display connection information for the Kubernetes control plane. Request the cluster node list:
kubectl get nodes
You should see one node listed with a status of NotReady—this is expected because the pod network plugin hasn’t been installed yet.
Installing the Flannel Pod Network Plugin
Kubernetes requires a network plugin to enable pod-to-pod communication across nodes. Flannel provides a simple, effective networking solution suitable for learning and production environments.
Understanding Pod Network Plugins
The Container Network Interface (CNI) standard defines how container runtimes and Kubernetes interact with network plugins. Flannel implements CNI by creating overlay networks that facilitate communication between pods regardless of their underlying physical network. Without a network plugin, pods cannot communicate with each other, and cluster components cannot fully function.
Deploy Flannel Network Plugin
Apply the Flannel manifest to your cluster:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
This command deploys Flannel as a DaemonSet that runs a pod on every cluster node. (The project has moved from the old coreos GitHub organization to flannel-io, so older URLs may point at stale manifests.) Flannel automatically creates the network interfaces and routes that enable pod communication. The deployment takes a minute or two to complete.
Verify Network Plugin Installation
Check the status of system pods:
kubectl get pods --all-namespaces
All pods in the kube-system and kube-flannel namespaces should transition to Running status within a few minutes. CoreDNS pods provide internal cluster DNS resolution. If CoreDNS pods show CrashLoopBackOff status, DNS configuration issues may exist—this typically occurs in virtual machine environments with specific DNS resolver configurations.
Configuring the Control Plane Node for Workload Deployment
By default, Kubernetes taints control plane nodes to prevent general workload deployment on these critical cluster components. For single-node learning clusters, you typically want to remove this taint.
Understanding Node Taints
Taints are key-value pairs applied to nodes that prevent pod scheduling unless pods explicitly tolerate those taints. The default control plane taint is node-role.kubernetes.io/control-plane:NoSchedule, which prevents most pods from running on control plane nodes.
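For context, a pod can run on a tainted control plane node without the taint being removed if it declares a matching toleration. A hedged sketch of a minimal manifest (the pod name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod          # hypothetical example name
spec:
  containers:
    - name: app
      image: nginx:latest
  tolerations:
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Exists"      # tolerate the taint regardless of its value
      effect: "NoSchedule"
```

This is how system components such as kube-proxy are able to run on control plane nodes even in production clusters that keep the taint in place.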
Remove the Control Plane Taint
For development and learning environments with single-node clusters, remove the control plane taint:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
This command removes the taint from all nodes, allowing regular workloads to run on the control plane. In production multi-node environments, keep this taint to ensure control plane stability.
Verify Node Status
Check that your node now shows Ready status:
kubectl get nodes
The node status should change from NotReady to Ready after the network plugin fully initializes. This process typically takes 1-3 minutes.
Joining Worker Nodes to the Cluster (Optional)
For multi-node clusters, additional worker nodes must join the control plane through a secure bootstrapping process.
Prepare Worker Nodes
On each additional machine you want to add as a worker node, repeat the system preparation steps (Steps 1 through the installation of Kubernetes components). Do not initialize a new cluster on worker nodes—they join an existing cluster. Ensure network connectivity between worker nodes and the control plane node.
Execute the Join Command
On each worker node, execute the saved kubeadm join command:
sudo kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:1234567890abcdefghijklmnopqrstuvwxyz
Replace the IP address, token, and hash with values from your initialization output. The join process takes a few minutes as the node connects to the cluster, downloads necessary components, and registers itself.
Verify Worker Node Addition
From the control plane node, verify the worker node appears in the cluster:
kubectl get nodes
Worker nodes should appear in the node list. Initially they may show NotReady status while cluster networking finalizes—typically they transition to Ready within 2-5 minutes.
Testing Your Kubernetes Cluster
Verification steps confirm that your cluster functions properly and can run containerized applications.
Deploy a Test Application
Create a simple test deployment using nginx:
kubectl create deployment test-nginx --image=nginx:latest
This command creates a deployment running a single nginx container. Check deployment status:
kubectl get deployments
The deployment should show 1/1 replicas ready.
Scale the Application
Scale the deployment to multiple replicas:
kubectl scale deployment test-nginx --replicas=3
Verify pod creation:
kubectl get pods
Three nginx pods should appear in the output, distributed across available nodes.
Expose the Application
Create a service exposing the deployment:
kubectl expose deployment test-nginx --port=80 --target-port=80 --type=NodePort
Check the service:
kubectl get svc
Note the NodePort value (typically a high port number like 30xxx). Access the application by navigating to http://localhost:[NodePort] or http://[node-ip]:[NodePort].
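Behind the scenes, kubectl expose generates a Service object. The declarative equivalent looks roughly like this sketch, with the nodePort left for the cluster to assign, as in the command above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-nginx
spec:
  type: NodePort
  selector:
    app: test-nginx           # label kubectl create deployment applies to the pods
  ports:
    - port: 80                # port exposed inside the cluster
      targetPort: 80          # container port the traffic is forwarded to
```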
Cleanup Test Resources
Remove test resources:
kubectl delete deployment test-nginx
kubectl delete svc test-nginx
These commands clean up temporary test resources from your cluster.
Troubleshooting Common Kubernetes Issues
Understanding common problems and their solutions accelerates resolution when issues occur.
CrashLoopBackOff Errors
This status indicates a pod repeatedly crashes immediately after starting. Causes include insufficient resource allocation, missing required volumes, or application configuration errors. Investigate using:
kubectl describe pod [pod-name]
View pod logs:
kubectl logs [pod-name]
If the issue is with CoreDNS specifically, it often relates to DNS resolver configuration on the host. Edit the CoreDNS configmap:
kubectl edit configmap coredns -n kube-system
Replace forward . /etc/resolv.conf with your network’s DNS server IP address. Alternatively, disable systemd-resolved stub listener:
sudo mkdir -p /etc/systemd/resolved.conf.d/
cat <<EOF | sudo tee /etc/systemd/resolved.conf.d/stub-listener.conf
[Resolve]
DNSStubListener=no
EOF
sudo systemctl restart systemd-resolved
ImagePullBackOff Errors
This error occurs when the kubelet cannot download the specified container image. Common causes include incorrect image names, non-existent image tags, or registry authentication issues. Verify the image exists:
kubectl describe pod [pod-name]
Check for typos in image specifications. For private registries, create docker registry secrets and reference them in pod specifications.
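A pod pulling from a private registry references such a secret through imagePullSecrets. This sketch assumes a secret named regcred already created with kubectl create secret docker-registry; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-app                               # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/team/app:1.0    # placeholder private image
  imagePullSecrets:
    - name: regcred                               # docker-registry secret in the same namespace
```

Attaching the secret to the namespace’s service account instead avoids repeating it in every pod specification.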
Node NotReady Status
Nodes stuck in NotReady status may have network plugin issues, kubelet service problems, or container runtime connectivity issues. Check node conditions:
kubectl describe node [node-name]
This reveals specific condition failures. Verify kubelet service:
sudo systemctl status kubelet
Check kubelet logs:
sudo journalctl -u kubelet -n 50
Restart kubelet if necessary:
sudo systemctl restart kubelet
Pod Stuck in Pending Status
Pods remain pending when they cannot be scheduled due to insufficient node resources, node selector constraints, or persistent volume unavailability. Check pod status:
kubectl describe pod [pod-name]
Review the Events section for scheduling failure reasons. Check node resource availability:
kubectl describe nodes
The Allocated Resources section shows used and available resources.
Best Practices for Kubernetes on Fedora 43
Following established best practices ensures stable, secure, and maintainable Kubernetes deployments.
Security Considerations
Maintain SELinux enabled in production environments, investing time in proper policy configuration. Implement network policies to control traffic between pods. Configure RBAC (Role-Based Access Control) to limit user and service account permissions. Keep all Kubernetes components, container runtimes, and operating system packages updated with security patches. Run security scans on container images before deployment.
Resource Management
Define resource requests and limits for all containers to prevent resource starvation and ensure fair resource allocation. Monitor cluster resource usage regularly using kubectl top or third-party monitoring solutions. Plan scaling strategies for workload growth. Consider implementing horizontal and vertical pod autoscaling for dynamic workload management.
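Requests and limits are declared per container in the pod template. A minimal sketch—the deployment name and the numbers are illustrative, not recommendations:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels: { app: example-app }
  template:
    metadata:
      labels: { app: example-app }
    spec:
      containers:
        - name: app
          image: nginx:latest
          resources:
            requests:          # guaranteed minimum, used by the scheduler
              cpu: "250m"
              memory: "128Mi"
            limits:            # hard ceiling, enforced at runtime
              cpu: "500m"
              memory: "256Mi"
```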
Maintenance and Updates
Kubernetes clusters require regular maintenance. Update Kubernetes components following the upgrade path documented by the Kubernetes project. Back up cluster configuration regularly. Establish procedures for controlled updates that minimize application disruption. Use Fedora’s versionlock feature to prevent automatic updates during critical periods:
sudo dnf versionlock add 'kubernetes*-1.31.*' 'cri-o*1.31*'
This locks Kubernetes and CRI-O at the specified version while allowing patch updates.
Production vs. Development Environments
Development environments prioritize ease of setup and quick iteration. Production environments require high availability, disaster recovery planning, and security hardening. Production clusters should implement multiple control plane nodes, persistent storage solutions, ingress controllers for external access, and comprehensive monitoring and logging. Development single-node clusters serve learning purposes but shouldn’t run production workloads.
Congratulations! You have successfully installed Kubernetes. Thanks for using this tutorial to install Kubernetes, the open-source system for automating deployment, scaling, and management of containerized applications, on your Fedora 43 Linux system. For additional resources and useful information, we recommend checking the official Kubernetes website.