
How To Install OpenStack on Rocky Linux 10

OpenStack is the leading open-source cloud computing platform, powering countless Infrastructure-as-a-Service (IaaS) deployments across enterprises worldwide. Rocky Linux 10, with its enterprise-grade stability and RHEL compatibility, provides an ideal foundation for OpenStack installations. This comprehensive guide delivers step-by-step instructions for deploying a production-ready OpenStack environment on Rocky Linux 10, covering everything from initial system preparation to launching your first virtual machine instances.


Understanding OpenStack Architecture

Core Service Components

OpenStack operates as a collection of interrelated services that work together to provide comprehensive cloud infrastructure capabilities. Keystone serves as the identity and authentication service, managing users, projects, and roles across the entire platform. The Glance image service stores and manages virtual machine templates, enabling rapid instance deployment.

Nova functions as the primary compute service, handling virtual machine lifecycle management including creation, scheduling, and termination. Neutron provides software-defined networking capabilities, managing virtual networks, routers, and security groups. Cinder delivers block storage services for persistent data volumes, while Horizon offers a web-based dashboard for administrative tasks.

The Heat orchestration service automates infrastructure deployment through templates, similar to AWS CloudFormation. These components communicate through REST APIs and message queuing systems, creating a unified cloud platform capable of scaling from small deployments to massive public cloud infrastructures.

Service Communication Framework

Each OpenStack service exposes RESTful API endpoints that enable programmatic interaction and service integration. The architecture relies on a central message queue system, typically RabbitMQ, for asynchronous communication between services. Database services store persistent configuration data, user information, and resource metadata across all components.

System Requirements and Prerequisites

Hardware Specifications

Production OpenStack deployments require substantial computing resources to operate effectively. The controller node needs a minimum of 8 CPU cores, 16GB RAM, and 100GB SSD storage for optimal performance. However, for production environments, 16 or more CPU cores with 32GB+ RAM provides better performance margins.

Storage planning requires careful consideration of different disk types and purposes. Separate disks should be allocated for the operating system, database storage, and virtual machine storage to prevent I/O contention. Multiple network interfaces enable proper separation of management, storage, and provider networks.

Rocky Linux 10 System Requirements

Rocky Linux 10 introduces enhanced hardware requirements reflecting modern computing standards. The operating system requires CPU architecture support for x86_64-v3 with AVX/AVX2 instruction sets. Memory requirements start at 8GB minimum, though 16GB provides better performance for OpenStack workloads.

Disk space allocation needs 10GB minimum for base installations, expanding to 40GB+ when including GUI components. These specifications ensure compatibility with OpenStack’s resource-intensive services and provide adequate overhead for system operations.

Network Architecture Planning

Network design forms the foundation of any successful OpenStack deployment. The management network (typically 192.168.1.0/24) handles inter-service communication and administrative access. Provider networks enable external connectivity for virtual machine instances.

Overlay networks using VXLAN or GRE protocols provide tenant isolation in multi-tenant environments. VLAN tagging considerations become crucial when implementing network segmentation across multiple tenants. Proper network planning prevents connectivity issues and security vulnerabilities later in the deployment process.

Pre-Installation System Setup

Initial Rocky Linux 10 Configuration

Begin with a fresh minimal installation of Rocky Linux 10 to avoid package conflicts and unnecessary services. Configure the system hostname using a fully qualified domain name that reflects its role in the OpenStack deployment. Network interface configuration requires static IP addressing with proper DNS resolution.

# Set hostname
sudo hostnamectl set-hostname controller.openstack.local

# Configure static network (example for eth0)
sudo nmcli connection modify eth0 ipv4.addresses 192.168.1.10/24
sudo nmcli connection modify eth0 ipv4.gateway 192.168.1.1
sudo nmcli connection modify eth0 ipv4.dns 8.8.8.8
sudo nmcli connection modify eth0 ipv4.method manual
sudo nmcli connection up eth0
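Later commands in this guide address nodes by the short names controller and compute1, so map them in /etc/hosts on every node to keep name resolution working without DNS. The compute address 192.168.1.20 is an example for this guide's layout; adjust to your own.

```shell
# Map the node names used throughout this guide (example addresses)
sudo tee -a /etc/hosts << EOF
192.168.1.10 controller
192.168.1.20 compute1
EOF
```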

Time synchronization using chrony ensures accurate timestamps across all OpenStack services. Configure chrony to use reliable NTP servers and enable automatic time adjustment.
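A minimal chrony setup might look like the following; pool.ntp.org is a placeholder for whatever NTP servers your environment trusts.

```shell
# Install chrony and point it at a reliable NTP source
sudo dnf install -y chrony
echo "server pool.ntp.org iburst" | sudo tee -a /etc/chrony.conf

# Start the daemon and verify that time sources are reachable
sudo systemctl enable --now chronyd
chronyc sources
```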

Security Configuration

OpenStack installation requires specific security configurations during the setup phase. Set SELinux to permissive mode temporarily to prevent installation conflicts. Disable firewall services initially, as OpenStack components will configure their own security rules.

# Configure SELinux
sudo setenforce 0
sudo sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Disable firewall temporarily
sudo systemctl stop firewalld
sudo systemctl disable firewalld

SSH configuration should include key-based authentication and proper security settings for remote administration. Create dedicated user accounts with appropriate sudo privileges for OpenStack management tasks.
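As a sketch, a dedicated management account with sudo rights and key-only SSH could be set up as follows; the username stack is illustrative.

```shell
# Create a management user with sudo privileges via the wheel group
sudo useradd -m stack
sudo usermod -aG wheel stack

# Install an authorized SSH key for the account
sudo mkdir -p /home/stack/.ssh
sudo cp ~/.ssh/authorized_keys /home/stack/.ssh/
sudo chown -R stack:stack /home/stack/.ssh
sudo chmod 700 /home/stack/.ssh

# Enforce key-based logins by disabling SSH password authentication
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload sshd
```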

Repository Configuration

Rocky Linux 10 requires additional repositories for OpenStack packages and dependencies. Enable the PowerTools (CRB) repository to access development libraries. Add the EPEL repository for extended package collections.

# Enable PowerTools/CRB repository
sudo dnf config-manager --set-enabled crb

# Install EPEL repository
sudo dnf install -y epel-release

# Add the RDO OpenStack Yoga repository (adjust the release and EL version to the newest build published for your platform)
sudo dnf install -y https://repos.fedorapeople.org/repos/openstack/openstack-yoga/rdo-release-yoga-1.el9.noarch.rpm

# Update system packages
sudo dnf update -y

The RDO OpenStack repository provides stable packages built for RHEL-based distributions. System updates ensure compatibility with current security patches and dependency requirements.

Essential Package Installation

Install fundamental packages required for OpenStack operations. Python 3 and pip provide the runtime environment for OpenStack services. Database client libraries enable connectivity to MariaDB and other database systems.

# Install essential packages
sudo dnf install -y python3-openstackclient python3-pip
sudo dnf install -y mariadb-server python3-PyMySQL
sudo dnf install -y vim wget curl net-tools

Network utilities assist with troubleshooting connectivity issues. Text editors and system administration tools support configuration file management and system monitoring tasks.

Database and Message Queue Setup

MariaDB Database Configuration

MariaDB serves as the primary database backend for all OpenStack services. Install and configure MariaDB with optimizations for OpenStack workloads. Security configuration through mysql_secure_installation removes default vulnerabilities.

# Install and start MariaDB
sudo dnf install -y mariadb mariadb-server python3-PyMySQL
sudo systemctl enable mariadb.service
sudo systemctl start mariadb.service

# Secure MariaDB installation
sudo mysql_secure_installation

Network binding configuration allows remote connections from compute nodes. Edit the MariaDB configuration file to enable network access and optimize performance settings.

# Configure MariaDB for OpenStack (Rocky Linux reads drop-in files from /etc/my.cnf.d/)
sudo tee /etc/my.cnf.d/openstack.cnf << EOF
[mysqld]
bind-address = 192.168.1.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF

# Restart MariaDB
sudo systemctl restart mariadb.service

Performance tuning includes buffer pool optimization and connection limit adjustments for high-concurrency OpenStack operations.
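For illustration, such tuning can go into another drop-in file under /etc/my.cnf.d/; the filename and values below are starting points rather than production-tested figures, and the buffer pool is commonly sized to roughly half of available RAM.

```shell
# Example InnoDB tuning for a 16GB controller node (adjust to your hardware)
sudo tee /etc/my.cnf.d/openstack-tuning.cnf << EOF
[mysqld]
innodb_buffer_pool_size = 4G
max_allowed_packet = 64M
EOF

sudo systemctl restart mariadb.service
```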

RabbitMQ Message Queue Setup

RabbitMQ provides reliable message queuing services for OpenStack inter-service communication. Install RabbitMQ server and create dedicated users for OpenStack services.

# Install RabbitMQ
sudo dnf install -y rabbitmq-server
sudo systemctl enable rabbitmq-server.service
sudo systemctl start rabbitmq-server.service

# Create OpenStack user
sudo rabbitmqctl add_user openstack RABBIT_PASS
sudo rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Management interface setup enables monitoring and troubleshooting of message queue operations. Configure clustering for high availability in production environments.
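The management interface ships as a bundled RabbitMQ plugin; enabling it exposes a web UI on port 15672.

```shell
# Enable the RabbitMQ management plugin (UI at http://controller:15672)
sudo rabbitmq-plugins enable rabbitmq_management

# Allow the openstack user to log in to the management UI
sudo rabbitmqctl set_user_tags openstack management
```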

Memcached Installation

Memcached provides session caching services to improve OpenStack dashboard performance. Configure Memcached for distributed caching across multiple nodes.

# Install and configure Memcached
sudo dnf install -y memcached python3-memcached
sudo systemctl enable memcached.service
sudo systemctl start memcached.service

Memory allocation and network configuration optimize caching performance for OpenStack services.
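On Rocky Linux, Memcached reads its settings from /etc/sysconfig/memcached. A sketch that opens the cache to the management network (192.168.1.10 follows this guide's example addressing) and raises the cache size:

```shell
# Listen on the management address in addition to localhost
sudo sed -i 's/^OPTIONS=.*/OPTIONS="-l 127.0.0.1,::1,192.168.1.10"/' /etc/sysconfig/memcached

# Raise the cache size (value in megabytes)
sudo sed -i 's/^CACHESIZE=.*/CACHESIZE="128"/' /etc/sysconfig/memcached

sudo systemctl restart memcached.service
```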

Service Verification

Test database connectivity and message queue functionality before proceeding with OpenStack service installation. Verify that all supporting services start automatically on system boot.

# Test MariaDB connectivity
mysql -u root -p -e "SHOW DATABASES;"

# Check RabbitMQ status
sudo rabbitmqctl status

# Verify Memcached operation
systemctl status memcached

Service startup configuration through systemd ensures reliable operation during system reboots.

Installing Keystone Identity Service

Database Preparation

Create dedicated databases and user accounts for Keystone authentication services. Proper privilege assignment ensures secure access while enabling necessary operations.

# Create Keystone database
mysql -u root -p << EOF
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
FLUSH PRIVILEGES;
EXIT;
EOF

Connection string testing verifies database accessibility before service configuration.
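A quick way to exercise the new grants before touching any Keystone configuration:

```shell
# Confirm the keystone account can open its database locally and over TCP
mysql -u keystone -pKEYSTONE_DBPASS -h localhost -e "USE keystone;"
mysql -u keystone -pKEYSTONE_DBPASS -h 192.168.1.10 -e "SHOW TABLES IN keystone;"
```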

Keystone Installation and Configuration

Install Keystone packages including Apache HTTP server and WSGI module. Configure the main Keystone configuration file with database connections and token providers.

# Install Keystone packages
sudo dnf install -y openstack-keystone httpd python3-mod_wsgi

# Configure Keystone
sudo tee -a /etc/keystone/keystone.conf << EOF
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
EOF

Fernet key initialization provides secure token encryption for authentication operations. Database synchronization populates initial schema and configuration data.

# Initialize Keystone database
sudo -u keystone keystone-manage db_sync

# Initialize Fernet keys
sudo keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
sudo keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Apache HTTP Server Configuration

Configure Apache virtual hosts for Keystone API endpoints. SSL/TLS configuration provides secure communication for production deployments.

# Configure Apache for Keystone
sudo ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
sudo systemctl enable httpd.service
sudo systemctl start httpd.service

WSGI module configuration optimizes performance for high-concurrent authentication requests.

Service Bootstrap and Verification

Bootstrap Keystone with initial administrative user and service endpoints. Environment variable configuration enables command-line interface access.

# Bootstrap Keystone
sudo keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

# Set environment variables
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

Token testing and domain creation verify proper Keystone functionality. Create additional projects and roles for service isolation.
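Later sections assign service accounts to a service project, so create it now; the demo project is optional and useful only for non-admin testing.

```shell
# Confirm authentication works end to end
openstack token issue

# Create the service project referenced by the Glance, Nova, and Neutron setup
openstack project create --domain default --description "Service Project" service

# Optional demo project for unprivileged testing
openstack project create --domain default --description "Demo Project" demo
```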

Installing Glance Image Service

Database and Service Setup

Create dedicated database resources for Glance image storage metadata. Register Glance services and endpoints in Keystone for authentication integration.

# Create Glance database
mysql -u root -p << EOF
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
FLUSH PRIVILEGES;
EXIT;
EOF

# Create Glance user and service
openstack user create --domain default --password GLANCE_PASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image

Service endpoint creation enables API access for image management operations.
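The endpoints themselves can be registered as follows; 9292 is the standard Glance API port.

```shell
# Register Glance endpoints in the Keystone service catalog
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```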

Glance Configuration

Install Glance packages and configure API service settings. Storage backend configuration determines where virtual machine images are stored.

# Install Glance
sudo dnf install -y openstack-glance

# Configure Glance API
sudo tee -a /etc/glance/glance-api.conf << EOF
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
EOF

Image format support includes QCOW2, RAW, and VHD formats for different virtualization platforms. Authentication configuration integrates with Keystone for secure access control.

Image Management

Start Glance services and configure automatic startup. Upload test images to verify proper operation.

# Populate the Glance database before starting the service
sudo -u glance glance-manage db_sync

# Start Glance services
sudo systemctl enable openstack-glance-api.service
sudo systemctl start openstack-glance-api.service

# Download and upload CirrOS test image
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

Image verification confirms successful upload and metadata management.

Installing Nova Compute Service

Database Configuration

Nova requires multiple databases for different service components. Create separate databases for API, compute, and cell services.

# Create Nova databases
mysql -u root -p << EOF
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
FLUSH PRIVILEGES;
EXIT;
EOF

Database user creation with appropriate privileges ensures secure access across all Nova components.
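Nova also needs a Keystone identity and catalog entries, following the same pattern used for Glance; 8774 is the standard compute API port.

```shell
# Create the Nova service user and grant it admin on the service project
openstack user create --domain default --password NOVA_PASS nova
openstack role add --project service --user nova admin

# Register the compute service and its endpoints
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
```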

Controller Node Installation

Install Nova controller packages and configure service settings. API, conductor, and scheduler services coordinate virtual machine operations.

# Install Nova controller packages
sudo dnf install -y openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler

# Configure Nova (quote the heredoc delimiter so the shell does not expand $my_ip)
sudo tee -a /etc/nova/nova.conf << 'EOF'
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 192.168.1.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
EOF

VNC console proxy configuration enables remote access to virtual machine consoles. Placement service integration handles resource scheduling and allocation.
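The [placement] section above assumes the Placement service exists. A condensed sketch of that setup follows; consult the official Placement install guide for the full contents of /etc/placement/placement.conf.

```shell
# Create the Placement database and Keystone identity
mysql -u root -p -e "CREATE DATABASE placement; GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';"
openstack user create --domain default --password PLACEMENT_PASS placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

# Install the package, configure placement.conf, then populate the schema
sudo dnf install -y openstack-placement-api
sudo -u placement placement-manage db sync
sudo systemctl restart httpd
```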

Database Population

Synchronize Nova databases with proper schema and initial data. Cell mapping connects compute services with database resources.

# Populate Nova databases
sudo -u nova nova-manage api_db sync
sudo -u nova nova-manage cell_v2 map_cell0
sudo -u nova nova-manage cell_v2 create_cell --name=cell1 --verbose
sudo -u nova nova-manage db sync

# Start Nova controller services
sudo systemctl enable openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
sudo systemctl start openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

Service verification ensures all Nova controller components operate correctly.

Compute Node Installation

Deploy Nova compute services on dedicated compute nodes. Libvirt configuration provides virtualization capabilities.

# Install Nova compute packages (on compute nodes)
sudo dnf install -y openstack-nova-compute libvirt-daemon-config-network \
  libvirt-daemon-kvm

# Configure compute node
sudo tee -a /etc/nova/nova.conf << EOF
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 192.168.1.20
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[libvirt]
virt_type = kvm
EOF

# Start compute services
sudo systemctl enable libvirtd.service openstack-nova-compute.service
sudo systemctl start libvirtd.service openstack-nova-compute.service

Hardware virtualization verification ensures proper KVM support. Compute host discovery registers new nodes with the controller.
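The virtualization check follows the pattern from the official install guide; if no flags are found, fall back to QEMU software emulation.

```shell
# Count hardware virtualization flags (Intel VT-x = vmx, AMD-V = svm)
egrep -c '(vmx|svm)' /proc/cpuinfo

# If the count is 0, switch Nova from KVM to software emulation
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
  sudo sed -i 's/^virt_type = kvm/virt_type = qemu/' /etc/nova/nova.conf
fi
```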

Service Integration and Verification

Complete Nova deployment by verifying compute host registration and service functionality. Create initial flavors for virtual machine sizing.

# Discover compute hosts
sudo -u nova nova-manage cell_v2 discover_hosts --verbose

# Verify Nova services
openstack compute service list

# Create flavors
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 1 m1.tiny
openstack flavor create --id 2 --vcpus 1 --ram 2048 --disk 20 m1.small

Hypervisor listing confirms successful compute node integration.

Installing Neutron Networking Service

Database and Service Setup

Create Neutron database and configure Keystone integration. Network services require dedicated authentication credentials.

# Create Neutron database
mysql -u root -p << EOF
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
FLUSH PRIVILEGES;
EXIT;
EOF

# Create Neutron user and service
openstack user create --domain default --password NEUTRON_PASS neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network

Service endpoint configuration enables network API access.
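Endpoint registration follows the same pattern as the other services; 9696 is the standard networking API port.

```shell
# Register Neutron endpoints in the service catalog
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
```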

Controller Node Configuration

Install Neutron controller packages and configure ML2 plugin. Open vSwitch provides software-defined networking capabilities.

# Install Neutron controller packages
sudo dnf install -y openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-openvswitch ebtables

# Configure Neutron
sudo tee -a /etc/neutron/neutron.conf << EOF
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
EOF

ML2 plugin configuration supports multiple network types and mechanisms.

Network Types Configuration

Configure network type drivers for different deployment scenarios. Flat networks provide direct provider connectivity. VXLAN networks enable tenant isolation.

# Configure ML2 plugin
sudo tee -a /etc/neutron/plugins/ml2/ml2_conf.ini << EOF
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
EOF

VLAN configuration supports traditional network segmentation. Security group settings enable firewall functionality.

Open vSwitch Setup

Install and configure Open vSwitch for software-defined networking. Bridge creation connects virtual and physical networks.

# Install and start Open vSwitch
sudo dnf install -y openvswitch
sudo systemctl enable openvswitch.service
sudo systemctl start openvswitch.service

# Create bridges
sudo ovs-vsctl add-br br-provider
sudo ovs-vsctl add-port br-provider eth1

# Configure OVS agent
sudo tee -a /etc/neutron/plugins/ml2/openvswitch_agent.ini << EOF
[ovs]
bridge_mappings = provider:br-provider

[agent]
tunnel_types = vxlan
l2_population = True

[securitygroup]
enable_security_group = true
firewall_driver = openvswitch
EOF

Provider bridge configuration enables external network access. Integration bridge setup handles tenant network traffic.
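VXLAN tunnels also need each node's overlay endpoint address, which the agent file above does not yet set. A small addition under [ovs] handles this; use 192.168.1.10 on the controller and each node's own address elsewhere.

```shell
# Add the local VXLAN tunnel endpoint to the [ovs] section
sudo sed -i '/^\[ovs\]/a local_ip = 192.168.1.10' \
  /etc/neutron/plugins/ml2/openvswitch_agent.ini
```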

Compute Node Configuration

Deploy Neutron agents on compute nodes for distributed networking. OVS configuration mirrors controller settings.

# Install Neutron compute packages (on compute nodes)
sudo dnf install -y openstack-neutron-openvswitch ebtables ipset

# Configure compute node networking
sudo systemctl enable neutron-openvswitch-agent.service
sudo systemctl start neutron-openvswitch-agent.service

Security group agent configuration provides distributed firewall capabilities.

Network Creation and Testing

Create initial networks for testing and production use. Provider networks enable external connectivity.

# Start Neutron services
sudo systemctl enable neutron-server.service \
  neutron-openvswitch-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service
sudo systemctl start neutron-server.service \
  neutron-openvswitch-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service

# Create provider network
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

# Create subnet
openstack subnet create --network provider \
  --allocation-pool start=192.168.1.100,end=192.168.1.200 \
  --dns-nameserver 8.8.8.8 --gateway 192.168.1.1 \
  --subnet-range 192.168.1.0/24 provider

Tenant network creation provides isolated environments for different projects. Router configuration connects tenant and provider networks.
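A sketch of that tenant side, using an example 172.16.1.0/24 range and an illustrative router name:

```shell
# Create an isolated self-service network and subnet
openstack network create selfservice
openstack subnet create --network selfservice \
  --dns-nameserver 8.8.8.8 --gateway 172.16.1.1 \
  --subnet-range 172.16.1.0/24 selfservice

# Route tenant traffic out through the provider network
openstack router create router1
openstack router add subnet router1 selfservice
openstack router set router1 --external-gateway provider
```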

Installing Horizon Dashboard

Package Installation and Configuration

Install Horizon dashboard packages and configure Django settings. Keystone integration enables single sign-on capabilities.

# Install Horizon
sudo dnf install -y openstack-dashboard

# Configure Horizon
sudo tee -a /etc/openstack-dashboard/local_settings.py << EOF
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
EOF

Session storage configuration with Memcached improves dashboard performance. API version settings ensure compatibility with OpenStack services.

Web Server Integration

Configure Apache virtual host for dashboard access. SSL certificate setup provides secure HTTPS connections.

# Configure Apache for Horizon
sudo systemctl restart httpd.service memcached.service

# Set proper SELinux contexts
sudo setsebool -P httpd_can_network_connect on

Static file serving optimization reduces page load times. Media handling configuration supports file uploads and downloads.

Access and Verification

Test web interface functionality and user authentication. Navigate through dashboard sections to verify service integration.

# Verify Horizon access
curl -I http://controller/dashboard/

Multi-domain support enables management of multiple Keystone domains. User interface walkthrough confirms proper installation.

Launching Your First Instance

Security Configuration

Create security groups with appropriate access rules. SSH access requires port 22 connectivity. ICMP rules enable ping testing.

# Add rules to the project's existing default security group
# (every project already has one; creating another named "default" fails)
openstack security group rule create --protocol tcp --dst-port 22 default
openstack security group rule create --protocol icmp default

Port-based access control provides granular network security. Custom security groups support different application requirements.

Instance Prerequisites

Generate SSH keypairs for secure instance access. Flavor selection determines virtual machine resource allocation.

# Create keypair
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

# List available resources
openstack flavor list
openstack image list
openstack network list

Network selection affects instance connectivity options. Floating IP pools provide external access capabilities.

Instance Creation

Launch virtual machine instances through command line and dashboard interfaces. Floating IP allocation enables external connectivity.

# Launch instance (--network accepts the network name; --nic net-id= requires a UUID)
openstack server create --flavor m1.nano --image cirros \
  --network provider --security-group default \
  --key-name mykey test-instance

# Allocate a floating IP and attach the address the command returns
# (192.168.1.150 is an example from this guide's allocation pool)
openstack floating ip create provider
openstack server add floating ip test-instance 192.168.1.150

Instance connectivity testing verifies proper network configuration. Console access provides troubleshooting capabilities.
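The console mentioned above is served through the noVNC proxy configured earlier; both the console URL and the boot log are available per instance.

```shell
# Retrieve the noVNC console URL for browser access
openstack console url show test-instance

# Review recent boot output without needing network access to the guest
openstack console log show test-instance
```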

# Test connectivity
ping 192.168.1.150
ssh cirros@192.168.1.150

Security Hardening and Best Practices

Authentication and Authorization

Implement strong password policies across all OpenStack services. Multi-factor authentication enhances security for administrative accounts. Role-based access control (RBAC) limits user permissions based on job functions.

# Rotate administrative credentials and create a least-privilege role
openstack user set --password NEW_ADMIN_PASS admin
openstack role create --description "Read-only access" reader
openstack role add --user readonly_user --project demo reader

Service account security requires dedicated credentials with minimal privileges. Regular password rotation prevents credential compromise.

Network Security

Configure firewall rules for each OpenStack service. SSL/TLS encryption protects API communications. Network segmentation isolates different traffic types.

# Generate a self-signed SSL certificate non-interactively
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/openstack.key \
  -out /etc/ssl/certs/openstack.crt \
  -subj "/CN=controller"

# Update Apache SSL configuration
sudo tee -a /etc/httpd/conf.d/ssl-openstack.conf << EOF
<VirtualHost *:443>
    ServerName controller
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/openstack.crt
    SSLCertificateKeyFile /etc/ssl/private/openstack.key
</VirtualHost>
EOF

VPN integration provides secure administrative access from remote locations. Network monitoring detects suspicious activities.

System Security

Regular security updates maintain protection against known vulnerabilities. Audit logging tracks user actions and system changes.

# Configure audit logging
sudo tee -a /etc/rsyslog.d/openstack.conf << EOF
# OpenStack audit logs
local0.*    /var/log/openstack/audit.log
local1.*    /var/log/openstack/keystone-audit.log
EOF

sudo systemctl restart rsyslog

File system permissions restrict unauthorized access to configuration files. Backup and disaster recovery planning ensures business continuity.
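For example, each service's main configuration file can be restricted to root and the service group; this is a sketch to extend to the remaining services in the same way.

```shell
# Keep credential-bearing config files out of reach of other users
sudo chown root:keystone /etc/keystone/keystone.conf
sudo chmod 640 /etc/keystone/keystone.conf
sudo chown root:nova /etc/nova/nova.conf
sudo chmod 640 /etc/nova/nova.conf
```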

Troubleshooting Common Issues

Service Discovery Problems

Service catalog verification ensures proper endpoint registration. Authentication troubleshooting identifies token and credential issues.

# Debug service catalog
openstack catalog list
openstack endpoint list
openstack token issue

# Check service status
systemctl status openstack-keystone
systemctl status openstack-nova-api
systemctl status openstack-neutron-server

Authorization problems often stem from incorrect role assignments or project memberships. Service logs provide detailed error information.

Network Connectivity Issues

Open vSwitch bridge status indicates network infrastructure health. Network namespace investigation helps isolate connectivity problems.

# Check OVS bridges and ports
sudo ovs-vsctl show
sudo ovs-vsctl list-ports br-provider

# Investigate network namespaces
sudo ip netns list
sudo ip netns exec qdhcp-<NETWORK_ID> ip addr show

Routing table verification ensures proper packet forwarding. Interface status checks reveal physical layer problems.

Instance Launch Failures

Compute service status affects virtual machine creation. Resource allocation problems prevent instance scheduling.

# Check compute services
openstack compute service list
openstack hypervisor list

# Investigate resource usage
openstack limits show --absolute
openstack quota show

Image and flavor compatibility issues cause launch failures. Log analysis reveals specific error conditions.

Log Analysis and Monitoring

Systemd journal examination provides comprehensive service information. OpenStack service logs contain detailed operational data.

# Check service logs
sudo journalctl -u openstack-nova-api
sudo journalctl -u openstack-neutron-server
sudo tail -f /var/log/nova/nova-api.log
sudo tail -f /var/log/neutron/server.log

Performance monitoring identifies resource bottlenecks. Alerting systems notify administrators of critical issues.

Congratulations! You have successfully installed OpenStack. Thanks for using this tutorial for installing OpenStack on Rocky Linux 10 system. For additional help or useful information, we recommend you check the official OpenStack website.


r00t

r00t is an experienced Linux enthusiast and technical writer with a passion for open-source software. With years of hands-on experience in various Linux distributions, r00t has developed a deep understanding of the Linux ecosystem and its powerful tools. He holds certifications in SCE and has contributed to several open-source projects. r00t is dedicated to sharing his knowledge and expertise through well-researched and informative articles, helping others navigate the world of Linux with confidence.