
Varnish Cache is a powerful HTTP accelerator designed to dramatically improve website performance by caching frequently accessed content in memory. Acting as a reverse proxy between your web server and visitors, Varnish can reduce page load times by up to 90% while significantly decreasing server resource consumption. This comprehensive guide walks you through installing and configuring Varnish Cache 6.0 LTS on CentOS 7 and CentOS 8 systems with Apache as the backend web server, ensuring your high-traffic websites deliver content at lightning speed.
What is Varnish Cache?
Varnish Cache is a web application accelerator specifically focused on HTTP caching and content delivery optimization. Unlike traditional caching solutions, Varnish stores frequently requested web pages, images, and other static assets directly in RAM, enabling incredibly fast retrieval speeds measured in microseconds rather than milliseconds.
The architecture is elegantly simple yet highly effective. Varnish sits in front of your web server, listening on the standard HTTP port 80. When visitors request content, Varnish intercepts these requests. If the content exists in its cache (a cache hit), Varnish serves it immediately without touching your backend server. When content isn’t cached (a cache miss), Varnish fetches it from your web server, serves it to the visitor, and stores a copy for future requests.
This caching mechanism dramatically reduces backend server load. Your Apache or Nginx server only processes unique requests and cache misses, while Varnish handles the repetitive traffic that typically overwhelms web servers during traffic spikes.
Varnish employs a powerful configuration language called VCL (Varnish Configuration Language) that allows fine-grained control over caching behavior. You can define exactly which content gets cached, set expiration times, implement custom cache invalidation rules, and manipulate HTTP headers. This flexibility makes Varnish suitable for everything from simple blogs to complex e-commerce platforms serving millions of daily visitors.
The performance benefits extend beyond raw speed. Faster page loads improve user experience, reduce bounce rates, and positively impact search engine rankings. Google considers page speed a ranking factor, making Varnish an essential tool for SEO optimization.
Why Use Varnish Cache on CentOS?
CentOS and Red Hat Enterprise Linux (RHEL) represent the gold standard for enterprise server deployments. Their stability, long-term support, and enterprise-grade security features make them ideal platforms for production web applications requiring maximum uptime and reliability.
Varnish Cache integrates seamlessly with CentOS environments. The official Varnish 6.0 LTS (Long Term Support) version receives regular security updates, bug fixes, and performance improvements from Varnish Software, ensuring your caching layer remains secure and efficient.
The cost-effectiveness is substantial. By dramatically reducing server load, Varnish allows existing hardware to handle 10-20 times more concurrent visitors. Many organizations avoid expensive infrastructure upgrades simply by implementing proper caching strategies. A single server with Varnish can often replace multiple load-balanced servers without caching.
Scalability becomes effortless. During traffic spikes from viral content, marketing campaigns, or seasonal shopping surges, Varnish absorbs the load by serving cached content without increasing backend server stress. This protection mechanism prevents server crashes and maintains consistent response times even under extreme load.
CentOS’s package management system simplifies Varnish installation and maintenance. The EPEL repository provides dependencies, while official Varnish repositories ensure you’re running the latest stable version with security patches.
Prerequisites
Before installing Varnish Cache, ensure your system meets these requirements and you have the necessary access and resources.
You need a CentOS 7 or CentOS 8/RHEL 8 server with root or sudo privileges. All commands in this tutorial require administrative access, so verify your permissions before proceeding. Check your CentOS version with cat /etc/centos-release to confirm compatibility.
A functioning web server must be installed and operational. This guide focuses on Apache integration, but the principles apply equally to Nginx. Your website should be accessible and serving content before adding Varnish to the stack.
Memory requirements depend on your caching needs. Varnish stores cached content in RAM, so allocate sufficient memory for optimal performance. A minimum of 2GB RAM is recommended for small websites, while high-traffic sites may require 8GB or more dedicated to caching. Calculate approximately 50-80% of available server memory for Varnish caching.
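As a rough sketch of that calculation (75% of total RAM, expressed in megabytes for the -s malloc setting used later in this guide), you can run:
awk '/MemTotal/ { printf "suggested malloc size: %dm\n", ($2/1024)*0.75 }' /proc/meminfo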
Firewall access is essential. You'll need to open port 80 for HTTP traffic through Varnish; the backend Apache port 8080 will be bound to localhost only and should not be opened to the internet. Understanding basic firewall configuration with firewalld is important for production security.
Command line proficiency is assumed. You should be comfortable using text editors like nano or vim, executing commands with sudo, and navigating the Linux filesystem. Basic knowledge of systemd service management helps troubleshooting.
Finally, create a complete system backup before making configuration changes. While Varnish installation is generally safe, maintaining backups protects against unexpected issues and allows quick rollback if needed.
Step 1: Install and Configure Apache Web Server
Apache installation and configuration forms the foundation for Varnish integration. Even if Apache is already installed, you’ll need to reconfigure it to work alongside Varnish.
Install Apache using the yum package manager:
sudo yum install -y httpd
This command downloads and installs Apache along with required dependencies. The -y flag automatically confirms installation prompts, streamlining the process.
Start the Apache service and enable it to launch automatically on system boot:
sudo systemctl start httpd
sudo systemctl enable httpd
Verify Apache is running correctly:
sudo systemctl status httpd
You should see output indicating the service is active and running. If errors appear, review system logs with journalctl -u httpd for diagnostic information.
Now comes the critical configuration change. By default, Apache listens on port 80 for incoming HTTP requests. Since Varnish will assume this role, you must reconfigure Apache to listen on an alternate port. Port 8080 is the standard choice for Varnish backend servers.
Edit the Apache configuration file:
sudo nano /etc/httpd/conf/httpd.conf
Locate the line containing Listen 80 (typically near the top of the file). Change this to:
Listen 127.0.0.1:8080
This configuration binds Apache specifically to localhost on port 8080, meaning it only accepts connections from the local system (Varnish) rather than direct external access.
Test your configuration for syntax errors:
sudo apachectl configtest
If the output shows “Syntax OK,” proceed to restart Apache:
sudo systemctl restart httpd
Verify Apache is now listening on the correct port:
sudo netstat -tulpn | grep :8080
Or using the modern ss command:
sudo ss -tulpn | grep :8080
You should see Apache (httpd) bound to port 8080. This confirms your web server is ready for Varnish integration.
Step 2: Install EPEL Repository
The EPEL (Extra Packages for Enterprise Linux) repository provides additional packages not included in the default CentOS repositories. Varnish requires several dependencies available exclusively through EPEL.
For CentOS systems, installation is straightforward:
sudo yum install epel-release
This command adds the EPEL repository configuration to your system, enabling access to thousands of additional packages maintained by the Fedora community but compatible with Enterprise Linux distributions.
For Red Hat Enterprise Linux users, EPEL isn’t available in default repositories. You must install it from a remote RPM package. First, source your operating system release information:
. /etc/os-release
This command makes system version variables available to your shell session. Then install EPEL using:
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-${VERSION_ID%%.*}.noarch.rpm
The VERSION_ID variable automatically inserts your RHEL version number, ensuring you download the correct EPEL package.
Verify EPEL repository installation:
yum repolist
You should see “epel” listed among available repositories. This confirmation means your system can now access EPEL packages.
Install additional utilities that streamline repository management:
sudo yum install -y pygpgme yum-utils
These tools provide enhanced GPG signature verification and repository manipulation capabilities, improving security and functionality. On CentOS 8/RHEL 8 the pygpgme package may no longer be available; it can safely be omitted there, since dnf handles GPG verification natively.
Step 3: Disable Default Varnish Module (CentOS 8/RHEL 8)
CentOS 8 and RHEL 8 introduce a modular repository system that includes a default Varnish module. However, this module contains an older Varnish version rather than the recommended 6.0 LTS release. You must disable this default module before installing the official Varnish package.
Execute the following command on CentOS 8 or RHEL 8 systems:
sudo dnf module disable varnish
This prevents conflicts between the default module and the official Varnish 6.0 LTS packages you’ll install from the Varnish Software repository.
The distinction is important. Varnish 6.0 LTS receives long-term support with regular security updates, bug fixes, and feature backports from newer versions. This makes it significantly more stable and secure than older versions included in default repositories.
CentOS 7 users can skip this step entirely. The modular repository system doesn’t exist in CentOS 7, so no module conflicts occur.
Step 4: Add Varnish Cache Repository
The official Varnish Cache repository hosted on packagecloud.io provides the latest stable Varnish 6.0 LTS packages. Adding this repository ensures you receive updates directly from Varnish Software.
Make sure the VERSION_ID shell variable is available (if you haven't already sourced it in Step 2, run . /etc/os-release first), then create the repository configuration file:
sudo tee /etc/yum.repos.d/varnishcache_varnish60lts.repo > /dev/null <<-EOF
[varnishcache_varnish60lts]
name=varnishcache_varnish60lts
baseurl=https://packagecloud.io/varnishcache/varnish60lts/el/${VERSION_ID%%.*}/\$basearch
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/varnishcache/varnish60lts/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
EOF
This command creates a new repository configuration with several important parameters. The baseurl mixes a shell variable and a yum variable: ${VERSION_ID%%.*} is expanded by your shell to the Enterprise Linux major version, while $basearch is resolved by yum to your system architecture. On a 64-bit x86 system running CentOS 8, for example, it resolves to https://packagecloud.io/varnishcache/varnish60lts/el/8/x86_64.
The repository configuration also enables GPG verification of the repository metadata, SSL certificate verification for secure downloads, and a metadata expiration setting that balances freshness with performance.
Update your yum cache to recognize the new repository:
sudo yum makecache
Verify the repository addition:
yum repolist
You should see “varnishcache_varnish60lts” listed among available repositories, confirming successful configuration.
Step 5: Install Varnish Cache
With repositories configured correctly, installing Varnish becomes a simple single command:
sudo yum install varnish
The package manager downloads Varnish along with all required dependencies, displaying a summary before installation. Review the package list and confirm when prompted.
Installation typically completes within a minute, depending on your internet connection speed. Once finished, verify the installation:
varnishd -V
This displays Varnish version information, compilation details, and feature support. You should see output indicating Varnish 6.0 LTS along with build parameters and supported features.
Understanding Varnish file locations helps with configuration and troubleshooting:
- Main VCL configuration file: /etc/varnish/default.vcl
- Systemd service file: /lib/systemd/system/varnish.service
- Runtime parameters: configured through systemd overrides
- Log files: managed through the varnishlog and varnishncsa utilities
These files control different aspects of Varnish behavior. The VCL file defines caching logic and backend server configuration, while the systemd service file controls runtime parameters like memory allocation and listening ports.
Step 6: Configure Varnish Cache
Varnish configuration involves two main components: runtime parameters controlled through the systemd service file, and caching behavior defined in the VCL (Varnish Configuration Language) file.
Configuring Runtime Parameters
Edit the Varnish service file to modify runtime parameters:
sudo systemctl edit --full varnish
This command opens the complete service file in your default text editor. Locate the ExecStart line, which typically looks like:
ExecStart=/usr/sbin/varnishd -a :6081 -s malloc,256m -f /etc/varnish/default.vcl
Modify this line to configure Varnish for production use:
ExecStart=/usr/sbin/varnishd \
-a :80 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-s malloc,2g \
-f /etc/varnish/default.vcl
Let’s break down these parameters:
- -a :80 changes the listening port from 6081 to 80, making Varnish answer standard HTTP requests
- -a localhost:8443,PROXY adds a PROXY protocol listener for SSL termination proxies
- -p feature=+http2 enables HTTP/2 protocol support for improved performance
- -s malloc,2g increases cache storage from 256MB to 2GB (adjust based on available RAM)
The memory allocation parameter deserves careful consideration. Allocate 50-80% of available server memory to Varnish for optimal caching performance. A server with 8GB RAM might allocate 4-6GB to Varnish, leaving sufficient memory for the operating system and backend applications.
Save the service file and exit the editor.
Configuring Backend Server
Edit the VCL configuration file to define your backend web server:
sudo nano /etc/varnish/default.vcl
Locate the backend configuration section near the top of the file. It looks similar to:
backend default {
.host = "127.0.0.1";
.port = "8080";
}
This configuration tells Varnish where to fetch content when cache misses occur. The host address 127.0.0.1 (localhost) and port 8080 correspond to the Apache configuration you modified earlier.
For most installations, this default configuration works perfectly. However, VCL supports advanced features like:
- Multiple backend servers for load balancing
- Backend health checks and automatic failover
- Custom cache key generation
- Header manipulation for improved caching
- Grace mode for serving stale content during backend failures (see the sketch below)
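For illustration, a minimal grace-mode sketch in Varnish 6 VCL might look like this; the six-hour window is an arbitrary example, and the subroutine can simply be added to default.vcl:
sub vcl_backend_response {
    # Keep objects up to 6 hours past their TTL so stale content
    # can be served while the backend is down or being refreshed.
    set beresp.grace = 6h;
}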
Test your VCL configuration for syntax errors:
sudo varnishd -C -f /etc/varnish/default.vcl
This command compiles the VCL file and displays any syntax errors. If the output shows compiled VCL code without errors, your configuration is valid.
Step 7: Configure Firewall Rules
Firewall configuration ensures Varnish is accessible from the internet while maintaining security. CentOS uses firewalld by default for firewall management.
Check firewalld status:
sudo systemctl status firewalld
If firewalld is active, configure the necessary rules. Open port 80 for HTTP traffic:
sudo firewall-cmd --permanent --add-service=http
Alternatively, open the specific port:
sudo firewall-cmd --permanent --add-port=80/tcp
Reload firewall rules to apply changes:
sudo firewall-cmd --reload
Verify the configuration:
sudo firewall-cmd --list-all
You should see HTTP service or port 80/tcp listed under allowed services/ports.
Port 8080 (backend Apache) should NOT be exposed to the internet. It should only accept localhost connections, as configured in Apache. This security measure prevents direct backend access, forcing all traffic through Varnish’s caching layer.
SELinux Considerations
SELinux (Security-Enhanced Linux) may interfere with Varnish operation. Check SELinux status:
getenforce
If SELinux is enforcing, you might need to adjust policies. However, default SELinux policies typically allow Varnish operation. Only modify SELinux settings if you encounter permission-related errors in Varnish logs.
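If you suspect SELinux is the culprit, one quick, non-invasive check (assuming the audit tools shipped with CentOS) is to look for recent AVC denials mentioning varnishd:
sudo ausearch -m avc -ts recent | grep varnishd
No output means no recent denials, and SELinux is unlikely to be the problem.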
Step 8: Enable and Start Varnish Service
With configuration complete, enable Varnish to start automatically on system boot:
sudo systemctl enable varnish
Reload the systemd daemon to recognize configuration changes:
sudo systemctl daemon-reload
Start the Varnish service:
sudo systemctl start varnish
You can combine enabling and starting with a single command:
sudo systemctl enable --now varnish
Verify Varnish is running correctly:
sudo systemctl status varnish
The output should show “active (running)” in green, indicating successful service startup. If you see “failed” or “inactive,” review error messages in the status output.
Confirm Varnish is listening on port 80:
sudo ss -tulpn | grep :80
You should see varnishd bound to port 80, confirming it’s ready to handle incoming HTTP requests.
Check running Varnish processes:
ps aux | grep varnish
Varnish runs two processes: a parent manager process and a child cache process. This architecture provides stability and enables seamless configuration reloads without dropping connections.
Step 9: Test Varnish Cache Installation
Testing confirms Varnish is caching content correctly and delivering performance improvements.
Testing with curl
Use curl to examine HTTP response headers:
curl -I http://your-server-ip
Look for Varnish-specific headers in the response:
- X-Varnish: contains transaction IDs indicating Varnish handled the request
- Via: 1.1 varnish: confirms the response passed through Varnish
- Age: shows how long the content has been cached (in seconds)
On the first request, you’ll see a cache miss. Make the same request again:
curl -I http://your-server-ip
The second response should show:
- A higher Age value
- Different X-Varnish transaction IDs
- Faster response time
This behavior confirms content caching is working.
Using varnishlog
Varnishlog provides real-time transaction monitoring. Open a terminal and run:
sudo varnishlog
Generate traffic to your website, and varnishlog displays detailed information about each request, including backend fetches, cache hits, and response headers.
Filter varnishlog output for specific URLs:
sudo varnishlog -q "ReqURL eq '/'"
This shows only requests to your homepage, making analysis easier.
Monitoring with varnishstat
Varnishstat displays real-time cache statistics:
varnishstat
Key metrics to monitor include:
- Cache hit ratio: Percentage of requests served from cache (aim for 80%+)
- Cache hits vs misses: Numerical comparison of cached vs uncached requests
- Backend connections: Number of active backend connections
- Client requests: Total requests received by Varnish
Press q to exit varnishstat. For one-time statistics, use:
varnishstat -1
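To print only the metrics listed above instead of the full counter dump, the -f field filter can be repeated (counter names assume Varnish 6's MAIN.* namespace):
varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss -f MAIN.backend_conn -f MAIN.client_req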
Browser Testing
Access your website in a browser and use developer tools (F12) to inspect network requests. Click on any resource and examine response headers. You should see Varnish indicators like “Via: varnish” in the headers tab.
Measure page load time before and after Varnish implementation. Most websites experience 50-90% load time reduction once caching is properly configured.
Common Varnish Configuration Options
Advanced Varnish configuration unlocks additional performance and functionality.
Cache Storage Backends
Varnish supports multiple storage backends:
- malloc: Stores cache in RAM (fastest, but lost on restart)
- file: Stores cache on disk (persistent across restarts, slower than malloc)
For most use cases, malloc provides superior performance. Configure in the systemd service file:
-s malloc,2g
For persistent caching, use file storage:
-s file,/var/lib/varnish/varnish_storage.bin,10g
Cache TTL Configuration
Control cache expiration times in VCL. Add to your default.vcl:
sub vcl_backend_response {
if (bereq.url ~ "^/static/") {
set beresp.ttl = 7d;
}
if (bereq.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
set beresp.ttl = 1h;
}
}
This configuration caches static directory content for 7 days and image/CSS/JavaScript files for 1 hour.
Excluding Content from Cache
Prevent caching of dynamic or user-specific content:
sub vcl_recv {
if (req.method == "POST" || req.url ~ "^/admin") {
return (pass);
}
if (req.http.Cookie ~ "logged_in") {
return (pass);
}
}
This excludes POST requests, admin areas, and authenticated user requests from caching.
Backend Health Checks
Configure automatic backend health monitoring:
backend default {
.host = "127.0.0.1";
.port = "8080";
.probe = {
.url = "/";
.interval = 5s;
.timeout = 1s;
.window = 5;
.threshold = 3;
}
}
Varnish probes the backend every 5 seconds and marks it unhealthy if 3 out of 5 probes fail.
Varnish Cache Performance Optimization
Optimizing Varnish performance requires monitoring, analysis, and iterative tuning.
Monitoring Cache Hit Ratio
Cache hit ratio indicates caching effectiveness. Calculate it from varnishstat:
Cache hit ratio = cache_hit / (cache_hit + cache_miss)
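As a quick command-line sketch of that calculation (counter names assume Varnish 6's MAIN.cache_hit and MAIN.cache_miss):
varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss | \
awk '$1=="MAIN.cache_hit"{h=$2} $1=="MAIN.cache_miss"{m=$2} END{if (h+m) printf "hit ratio: %.1f%%\n", 100*h/(h+m)}'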
Target a hit ratio above 80%. Lower ratios indicate:
- Insufficient cache TTL values
- Too many unique URLs (query parameters)
- Content being marked uncacheable
- Inadequate cache storage allocation
Memory Optimization
Monitor memory usage with:
free -h
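To see how much of that allocation Varnish itself has consumed, the storage counters can be read directly (counter names assume the default malloc store, which Varnish labels s0):
varnishstat -1 -f SMA.s0.g_bytes -f SMA.s0.g_space
g_bytes is the memory currently in use and g_space is what remains of the configured limit.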
If Varnish’s malloc storage approaches the allocated limit, consider:
- Increasing memory allocation
- Implementing cache eviction strategies
- Reducing cache TTL for large objects
VCL Performance Tuning
Optimize VCL logic for faster cache lookups:
sub vcl_recv {
# Drop the query string entirely when it begins with a tracking parameter (utm_*, fbclid, gclid)
if (req.url ~ "\?(utm_|fbclid|gclid)") {
set req.url = regsub(req.url, "\?.*$", "");
}
# Normalize accept-encoding
if (req.http.Accept-Encoding) {
if (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} else {
unset req.http.Accept-Encoding;
}
}
}
This normalization improves cache efficiency by reducing cache key variations.
Thread Pool Configuration
For high-traffic sites, optimize thread pools:
-p thread_pool_min=100
-p thread_pool_max=5000
-p thread_pool_timeout=300
Add these parameters to your systemd service file ExecStart line.
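Combined with the settings from Step 6, the ExecStart line might end up looking roughly like this (the values are illustrative starting points, not definitive tuning):
ExecStart=/usr/sbin/varnishd \
-a :80 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-p thread_pool_min=100 \
-p thread_pool_max=5000 \
-p thread_pool_timeout=300 \
-s malloc,2g \
-f /etc/varnish/default.vcl
Remember to run sudo systemctl daemon-reload and restart Varnish after editing the service file.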
Troubleshooting Common Issues
Varnish Service Fails to Start
Check service status and logs:
sudo systemctl status varnish -l
sudo journalctl -u varnish -n 50
Common causes include:
- VCL syntax errors (test with varnishd -C -f /etc/varnish/default.vcl)
- Port conflicts (another service using port 80)
- Insufficient memory allocation
- Incorrect file permissions
Low Cache Hit Ratio
Analyze why content isn’t being cached:
sudo varnishlog -q "VCL_call eq 'PASS'"
This shows requests bypassing cache. Common reasons include:
- Cookies preventing caching
- Dynamic content without proper cache headers
- POST requests
- Backend sending Cache-Control: no-cache headers (see the quick check below)
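A quick way to see what the backend itself sends, bypassing Varnish entirely, is to query Apache directly on the backend port and inspect the caching-related headers:
curl -sI http://127.0.0.1:8080/ | grep -iE 'cache-control|set-cookie|expires'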
Backend Connection Failures
Test backend connectivity:
curl http://127.0.0.1:8080
If this fails, verify:
- Apache is running on port 8080
- Apache configuration is correct
- No firewall blocking localhost connections
- Backend server isn’t overloaded
Memory Allocation Errors
If Varnish can’t allocate requested memory:
- Reduce malloc size in systemd service file
- Close other memory-intensive applications
- Upgrade server RAM
- Switch to file-based storage for larger caches
VCL Compilation Errors
Syntax errors prevent VCL loading. Common mistakes include:
- Missing semicolons
- Incorrect string escaping
- Undefined variables
- Invalid regular expressions
Always test VCL before reloading:
sudo varnishd -C -f /etc/varnish/default.vcl
Security Considerations
Securing your Varnish installation protects against attacks and unauthorized access.
Restricting Admin Access
Varnish admin interface (varnishadm) provides direct cache management. Restrict access to localhost only and protect with a secret file:
-S /etc/varnish/secret
Ensure proper file permissions:
sudo chmod 600 /etc/varnish/secret
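With the secret file in place, day-to-day cache management goes through varnishadm. For example, invalidating everything under a path (the /blog/ pattern here is purely illustrative) can be done with a ban:
sudo varnishadm "ban req.url ~ ^/blog/"
sudo varnishadm ban.list
The second command lists the bans currently in effect.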
SSL/TLS Implementation
Varnish doesn’t handle SSL/TLS encryption natively. Implement SSL termination using:
- Nginx reverse proxy: Place Nginx in front of Varnish for SSL handling
- Hitch: Dedicated SSL/TLS termination proxy designed for Varnish
- Cloud CDN: Use services like Cloudflare for SSL termination
This architecture becomes: Client → SSL Termination → Varnish → Backend
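As a hedged sketch of the Hitch option, a minimal /etc/hitch/hitch.conf pointing at the PROXY listener configured in Step 6 (localhost:8443) could look like the following; the certificate path is a placeholder you must replace:
# /etc/hitch/hitch.conf (minimal sketch)
frontend = "[*]:443"
backend = "[127.0.0.1]:8443"
pem-file = "/etc/pki/tls/private/example.com.pem"
write-proxy-v2 = on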
Preventing Cache Poisoning
Sanitize headers to prevent cache poisoning attacks:
sub vcl_recv {
# Remove potentially dangerous headers
unset req.http.X-Forwarded-For;
unset req.http.X-Real-IP;
set req.http.X-Forwarded-For = client.ip;
}
Regular Updates
Keep Varnish updated with security patches:
sudo yum update varnish
Subscribe to Varnish security announcements to stay informed about vulnerabilities.
Congratulations! You have successfully installed and configured Varnish Cache 6.0 LTS on your CentOS system. Thanks for using this tutorial. For additional help or useful information, we recommend you check the official Varnish website.