How To Install Kubernetes on AlmaLinux 10

Container orchestration has become essential for modern application deployment and management. Kubernetes stands as the leading platform for automating the deployment, scaling, and management of containerized applications. AlmaLinux 10, with its enterprise-grade stability and Red Hat Enterprise Linux compatibility, provides an excellent foundation for Kubernetes deployments.

This comprehensive guide walks you through installing Kubernetes on AlmaLinux 10 from scratch. You’ll learn how to configure a production-ready cluster with proper security measures, networking setup, and best practices. Whether you’re deploying a single-node development environment or a multi-node production cluster, this tutorial covers every essential step.

The installation process involves system preparation, container runtime configuration, Kubernetes components installation, and cluster initialization. We’ll also cover troubleshooting common issues and implementing security best practices to ensure your cluster operates reliably and securely.

Prerequisites and System Requirements

Before beginning the Kubernetes installation on AlmaLinux 10, ensure your system meets the minimum hardware and software requirements for stable operation.

Hardware Requirements:
Your system needs at least 2 CPU cores, though 4 or more cores are recommended for optimal performance. Memory requirements start at 4GB RAM minimum, but 8GB or more provides better stability for production workloads. Storage space should be at least 50GB available disk space to accommodate the operating system, Kubernetes components, and container images.

Software Prerequisites:
Start with a fresh AlmaLinux 10 installation with all latest updates applied. You’ll need root or sudo administrative privileges throughout the installation process. Container runtime compatibility is essential, as Kubernetes requires Docker, containerd, or another compatible container runtime.

Network Configuration:
Each cluster node requires a unique hostname and static IP address for reliable communication. Network connectivity between all nodes is mandatory, with specific ports open for Kubernetes components. DNS resolution should work properly across all nodes to ensure service discovery functions correctly.

SELinux Considerations:
AlmaLinux 10 ships with SELinux enabled by default. While you can configure Kubernetes to work with SELinux in enforcing mode, many installations set it to permissive mode initially to avoid configuration complexity.

User Account Setup:
Ensure you have a non-root user account with sudo privileges for running kubectl commands after installation. This follows security best practices and prevents potential issues with cluster management.
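
If you still need to create such an account, here is a minimal example; the username k8sadmin is only a placeholder:

sudo useradd -m k8sadmin
sudo passwd k8sadmin
sudo usermod -aG wheel k8sadmin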

System Preparation and Initial Configuration

Proper system preparation forms the foundation of a successful Kubernetes deployment. This section covers essential configuration steps that must be completed before installing Kubernetes components.

Hostname Configuration:
Set unique hostnames for each cluster node using the hostnamectl command. Update the /etc/hosts file to include entries for all cluster nodes with their respective IP addresses and hostnames. This ensures proper name resolution across the cluster without relying solely on external DNS services.

sudo hostnamectl set-hostname k8s-master
echo "192.168.1.10 k8s-master" | sudo tee -a /etc/hosts
echo "192.168.1.11 k8s-worker1" | sudo tee -a /etc/hosts

SELinux Configuration:
Configure SELinux to permissive mode to avoid potential conflicts during installation. Use the setenforce command to change the current mode, then modify the configuration file for permanent changes.

sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux

Verify the SELinux status using sestatus to confirm the changes took effect.
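
sestatus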

Firewall Configuration:
Configure firewall rules to allow necessary Kubernetes traffic. Master nodes require ports 6443 (API server), 2379-2380 (etcd client API), 10250 (kubelet API), 10259 (kube-scheduler), 10257 (kube-controller-manager), 179 (Calico BGP), and 4789 (VXLAN overlay).

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --permanent --add-port=179/tcp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload

Worker nodes need ports 179 (BGP), 10250 (kubelet), 30000-32767 (NodePort services), and 4789 (VXLAN).
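
On each worker node, open those ports and reload the firewall:

sudo firewall-cmd --permanent --add-port=179/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload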

Swap Memory Disable:
Kubernetes requires swap memory to be disabled for proper operation. Disable swap temporarily using swapoff -a, then comment out swap entries in /etc/fstab to make the change permanent.

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

System Updates:
Update all system packages to ensure you have the latest security patches and bug fixes before proceeding with Kubernetes installation.

sudo dnf update -y
sudo dnf install -y curl wget vim

Container Runtime Installation (Docker/containerd)

Kubernetes requires a container runtime to manage containers on each node. Docker remains popular, though containerd is becoming the preferred choice for new installations due to its lighter footprint and direct CRI compatibility.

Docker Installation:
Add the Docker repository to your system and install Docker CE. Create the repository configuration file and add the official Docker repository.

sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io

Start and enable the Docker service to ensure it runs automatically at boot time.

sudo systemctl start docker
sudo systemctl enable docker

Container Runtime Interface (CRI):
Since Kubernetes v1.24 removed built-in Docker support, you need cri-dockerd to provide the CRI interface. Install cri-dockerd to bridge Docker with Kubernetes.

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.4.0/cri-dockerd-0.4.0.amd64.tgz
tar -xzf cri-dockerd-0.4.0.amd64.tgz
sudo install -o root -g root -m 0755 cri-dockerd/cri-dockerd /usr/local/bin/cri-dockerd

Create systemd service files for cri-dockerd to manage the service properly.
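
A common approach, assuming you use the unit files shipped in the cri-dockerd source repository and the /usr/local/bin path from the previous step, looks roughly like this:

wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo install cri-docker.service cri-docker.socket /etc/systemd/system/
sudo sed -i 's:/usr/bin/cri-dockerd:/usr/local/bin/cri-dockerd:' /etc/systemd/system/cri-docker.service
sudo systemctl daemon-reload
sudo systemctl enable --now cri-docker.socket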

Docker Daemon Configuration:
Configure Docker daemon settings for optimal Kubernetes integration. Create a daemon.json file with appropriate cgroup driver settings.

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Restart Docker to apply the configuration changes.
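
sudo systemctl restart docker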

Verification Steps:
Test the container runtime installation by running a simple container and verifying Docker service status.

sudo docker run hello-world
sudo systemctl status docker

This confirms that Docker is properly installed and can create and run containers successfully.

Kubernetes Components Installation

Installing Kubernetes involves three essential components: kubeadm for cluster management, kubelet as the node agent, and kubectl as the command-line interface.

Repository Setup:
Create a Kubernetes repository configuration file to access the official packages. The repository URL and GPG keys ensure package authenticity and security.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Package Installation:
Install the three core Kubernetes packages using dnf with the --disableexcludes flag to bypass the exclusion settings.

sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

The kubeadm tool handles cluster initialization and node joining operations. Kubelet runs on every node and manages pod lifecycle, container execution, and node registration. Kubectl provides the command-line interface for cluster management and application deployment.

Service Configuration:
Enable the kubelet service to start automatically at boot time, though it won’t start successfully until the cluster is initialized.

sudo systemctl enable kubelet

The kubelet service will remain in a crash loop until kubeadm initializes the cluster and provides the necessary configuration files.

Version Management:
Check installed package versions to ensure compatibility across all cluster nodes.

kubeadm version
kubectl version --client
kubelet --version

All nodes in the cluster should run the same Kubernetes version to avoid compatibility issues and ensure proper cluster operation.

Package Exclusion:
Prevent accidental updates to Kubernetes packages by keeping the exclusion settings in the repository configuration. This prevents package managers from automatically updating Kubernetes components, which could break cluster functionality.

Installation Verification:
Verify that all components are properly installed and accessible from the command line. Check that the binaries are in the system PATH and execute without errors.
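
which kubeadm kubectl kubelet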

The installation process creates configuration files in /etc/kubernetes/ and /var/lib/kubelet/ directories, which will be populated during cluster initialization.

Kubernetes Cluster Initialization

Cluster initialization creates the control plane components and generates the certificates and configuration files necessary for cluster operation.

Master Node Initialization:
Initialize the Kubernetes cluster on the designated master node using kubeadm. Specify the CRI socket path for Docker integration and configure the pod network CIDR.

sudo kubeadm init --cri-socket unix:///run/cri-dockerd.sock --pod-network-cidr=192.168.0.0/16

The initialization process creates the API server, etcd database, controller manager, and scheduler components. This process takes several minutes to complete and generates essential cluster information.

Cluster Configuration Export:
After successful initialization, configure kubectl access for the regular user account.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

This configuration allows you to manage the cluster without using sudo for kubectl commands, following security best practices.

Initial Cluster Status:
Check the cluster status immediately after initialization. The master node will show as “NotReady” until a pod network is installed.

kubectl get nodes
kubectl get pods --all-namespaces

System pods should be running in the kube-system namespace, including etcd, API server, controller manager, and scheduler.

Join Command Preparation:
Save the kubeadm join command output from the initialization process. This command contains a token and certificate hash needed for adding worker nodes.

kubeadm token create --print-join-command

Tokens expire after 24 hours by default, so regenerate them if needed when adding nodes later.

Certificate Management:
The initialization process generates all necessary TLS certificates for secure cluster communication. These certificates are stored in /etc/kubernetes/pki/ and have specific expiration dates.

Security Considerations:
The admin.conf file contains full cluster access credentials. Protect this file and only copy it to trusted user accounts. Consider setting up additional users with limited permissions for day-to-day operations.

Troubleshooting Initialization:
If initialization fails, check system logs using journalctl -xeu kubelet and kubeadm logs. Common issues include firewall blocking, incorrect CRI configuration, or insufficient system resources.

Reset the cluster if needed using kubeadm reset and address any configuration issues before retrying initialization.
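
For example, inspect the kubelet logs and, if necessary, reset while passing the same cri-dockerd socket used during initialization:

sudo journalctl -xeu kubelet
sudo kubeadm reset --cri-socket unix:///run/cri-dockerd.sock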

Pod Network Configuration

Kubernetes requires a pod network add-on to enable communication between pods across different nodes. Without a network plugin, pods remain isolated and cannot communicate properly.

Network Add-on Necessity:
The Container Network Interface (CNI) provides networking capabilities for pods. Popular options include Calico, Flannel, and Weave Net, each with different features and performance characteristics.

Calico Installation:
Calico provides both networking and network policy features, making it suitable for production environments.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Calico automatically detects the pod CIDR configured during cluster initialization and configures routing accordingly.

Flannel Installation:
Flannel offers simpler networking with VXLAN overlay networking, making it easier to set up in basic environments.

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Network Verification:
After installing the network add-on, verify that all nodes show “Ready” status and system pods are running properly.

kubectl get nodes
kubectl get pods --all-namespaces

All system pods should transition to “Running” status, and nodes should become “Ready” once networking is established.

Network Policy Considerations:
Calico supports Kubernetes Network Policies for micro-segmentation and security controls. Plan your network policies early in the deployment process to implement proper security boundaries.

Troubleshooting Network Issues:
Common networking problems include incorrect CIDR configuration, firewall blocking, or conflicting network ranges. Check pod logs and node status to identify connectivity issues.

Alternative Network Solutions:
Consider different network plugins based on your requirements. Calico excels in security and policy enforcement, Flannel provides simplicity, and Weave Net offers easy setup with built-in encryption options.

Monitor network performance and adjust configurations as needed for your specific workload requirements and security policies.

Adding Worker Nodes to the Cluster

Expanding your Kubernetes cluster with worker nodes provides additional compute capacity and enables workload distribution across multiple machines.

Worker Node Preparation:
Complete all prerequisite steps on worker nodes, including system updates, container runtime installation, and Kubernetes package installation. Ensure firewall configuration allows the required worker node ports.

Node Joining Process:
Use the kubeadm join command saved from the master initialization process to add worker nodes to the cluster.

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --cri-socket unix:///run/cri-dockerd.sock

Replace the placeholders with actual values from your cluster initialization output.

Token Management:
If the original token has expired, generate a new one on the master node:

kubeadm token create --print-join-command

Tokens provide temporary access for joining nodes and should be treated as sensitive information.

Node Verification:
After joining, verify that the new node appears in the cluster and reaches “Ready” status.

kubectl get nodes
kubectl describe node <worker-node-name>

The node should show appropriate capacity information and running system pods.

Troubleshooting Node Issues:
Common problems include network connectivity issues, certificate validation failures, or mismatched Kubernetes versions. Check kubelet logs on both master and worker nodes for error messages.
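
sudo journalctl -u kubelet -f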

Node Management:
Use kubectl commands to manage nodes, including labeling for workload scheduling and cordoning for maintenance activities.

kubectl label node <node-name> node-role.kubernetes.io/worker=worker
kubectl get nodes --show-labels

Proper node labeling helps with workload placement and cluster organization.

Cluster Verification and Testing

Comprehensive testing ensures your Kubernetes cluster functions correctly and can handle application workloads reliably.

Basic Cluster Tests:
Verify cluster components and node status using kubectl commands.

kubectl get nodes -o wide
kubectl get pods --all-namespaces
kubectl cluster-info

All nodes should show “Ready” status, and system pods should be running without errors.

Application Deployment Testing:
Deploy a test application to verify pod scheduling, service creation, and network connectivity.

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get services

Networking Tests:
Test inter-pod communication and service discovery by deploying multiple pods and verifying connectivity between them.

kubectl run test-pod --image=busybox --restart=Never --rm -it -- nslookup nginx

Resource Validation:
Check cluster resource capacity and utilization to ensure proper resource allocation. Note that kubectl top requires the metrics-server add-on, which kubeadm does not install by default.

kubectl top nodes
kubectl describe nodes

Performance Benchmarking:
Run basic performance tests to establish baseline metrics for monitoring and capacity planning purposes.
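
A rough baseline, assuming the nginx test deployment from the previous section is still running, is to time how quickly the cluster scales it out and back:

kubectl scale deployment nginx --replicas=10
time kubectl rollout status deployment/nginx
kubectl scale deployment nginx --replicas=1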

These verification steps confirm that your cluster is ready for production workloads and can scale applications effectively.

Best Practices and Security Configuration

Implementing security best practices and operational procedures ensures your Kubernetes cluster remains secure and maintainable in production environments.

Security Best Practices:
Configure Role-Based Access Control (RBAC) to limit user permissions and implement the principle of least privilege. Create service accounts with specific permissions rather than using default accounts with broad access.

kubectl create serviceaccount app-service-account
kubectl create clusterrole app-reader --verb=get,list,watch --resource=pods,services
kubectl create clusterrolebinding app-binding --clusterrole=app-reader --serviceaccount=default:app-service-account

Network Security:
Implement network policies to control traffic between pods and namespaces. Define ingress and egress rules that restrict communication to only necessary connections.
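
As a starting point, a default-deny ingress policy for a namespace might look like this (the policy name and the choice of the default namespace are only placeholders):

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF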

Resource Management:
Configure resource requests and limits for all workloads to prevent resource exhaustion and ensure fair resource allocation.

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "500m"

Pod Security Standards:
Implement pod security standards to enforce security policies at the cluster level. Configure admission controllers to automatically enforce security requirements.
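
For example, the built-in Pod Security Admission controller enforces these standards per namespace through labels; the levels shown here are illustrative:

kubectl label namespace default pod-security.kubernetes.io/enforce=baseline
kubectl label namespace default pod-security.kubernetes.io/warn=restricted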

High Availability Setup:
Plan for high availability by deploying multiple master nodes and implementing proper load balancing. Regular etcd backups ensure data recovery capabilities.
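
A minimal snapshot example, assuming the etcdctl client is installed on the control plane node and the default kubeadm certificate paths are in use (the output path is just a placeholder):

sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key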

Monitoring and Logging:
Deploy monitoring solutions like Prometheus and Grafana to track cluster health and performance metrics. Implement centralized logging to troubleshoot issues effectively.
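
One common route, assuming Helm is installed, is the kube-prometheus-stack chart; the release name monitoring is only a placeholder:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace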

Maintenance Procedures:
Establish regular maintenance schedules for updates, certificate rotation, and security patches. Document procedures for common operational tasks and emergency recovery.

Troubleshooting Common Issues

Understanding common Kubernetes installation and operational issues helps maintain cluster stability and reduces downtime.

Installation Problems:
Package dependency conflicts often occur when repository configurations are incorrect or when mixing different package sources. Verify repository settings and clean package cache if needed.

sudo dnf clean all
sudo dnf makecache

Container Runtime Issues:
Docker or containerd failures can prevent pods from starting. Check runtime status and logs to identify configuration problems.

sudo systemctl status docker
sudo journalctl -u docker

Network Connectivity Problems:
Pod networking issues often result from firewall configurations, incorrect CIDR settings, or CNI plugin problems. Verify firewall rules and network plugin status.
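
sudo firewall-cmd --list-ports
kubectl get pods -n kube-system -o wide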

Certificate Expiration:
Kubernetes certificates expire periodically and can cause cluster access issues. Check certificate status and renew as needed.

sudo kubeadm certs check-expiration
sudo kubeadm certs renew all

Resource Exhaustion:
Insufficient resources can cause pod scheduling failures and performance degradation. Monitor resource usage and add capacity when needed.

Diagnostic Tools:
Use kubectl debugging commands to gather information about cluster problems.

kubectl describe node <node-name>
kubectl logs <pod-name> -n <namespace>
kubectl get events --sort-by=.metadata.creationTimestamp

Recovery Procedures:
Document recovery procedures for common failure scenarios, including node replacement, cluster reset, and configuration restoration.

Keep regular backups of cluster configuration and data to ensure quick recovery from major failures.

Congratulations! You have successfully installed Kubernetes. Thanks for using this tutorial for installing Kubernetes on your AlmaLinux OS 10 system. For additional help or useful information, we recommend you check the official Kubernetes website.
