How To Install Varnish on openSUSE

Varnish Cache stands as one of the most powerful HTTP accelerators available for modern web infrastructure. This high-performance reverse proxy caching server dramatically improves website speed by storing frequently accessed content in memory, reducing backend server load and delivering content to visitors at lightning speed. When a visitor requests a page, Varnish serves the cached version from RAM rather than forcing the backend server to regenerate the same content repeatedly.
The difference between serving content from memory versus processing it through your web server backend can be substantial. Websites using Varnish often see response times drop from hundreds of milliseconds to single-digit milliseconds. This performance boost translates directly into better user experience, improved search engine rankings, and the ability to handle significantly more concurrent visitors without additional hardware.
This comprehensive guide walks through installing and configuring Varnish Cache on openSUSE systems. Whether running openSUSE Tumbleweed or Leap, the process remains straightforward when following these detailed instructions. By the end of this tutorial, Varnish will be caching content, accelerating website delivery, and reducing server resource consumption substantially.
Prerequisites
Before beginning the Varnish installation process, certain requirements must be met to ensure a smooth setup experience.
System Requirements
openSUSE compatibility spans multiple versions including Tumbleweed (rolling release), Leap 15.6, Leap 15.7, and Leap 16.0. The system should have at least 2GB of RAM, though 4GB or more is recommended for production environments since Varnish stores cached content in memory. A modern multi-core processor will handle concurrent connections more efficiently, but even modest systems benefit from Varnish’s caching capabilities.
Root access or sudo privileges are essential. All commands in this guide require administrative permissions to install packages, modify system configurations, and manage services.
Existing Infrastructure
A functioning web server must already be installed and operational before adding Varnish. Apache or Nginx typically serve as the backend application server. The web server should currently be running on its default port, which will be changed during configuration to allow Varnish to listen on the standard HTTP port 80.
Basic familiarity with Linux command-line operations proves invaluable. Understanding how to use text editors like nano or vi, navigate the filesystem, and interpret command output makes the installation process significantly smoother.
Network Considerations
Port 80 must be available for Varnish to bind to, as this is where HTTP traffic arrives. The backend web server will be reconfigured to listen on an alternative port, typically 8080. Firewall configurations need adjustment to allow traffic on these ports. Understanding basic networking concepts and firewall management helps prevent connectivity issues after installation.
Understanding Varnish Versions
Varnish releases follow two distinct tracks that serve different deployment scenarios.
Bleeding Edge vs. LTS Releases
Development versions receive frequent updates with the latest features and improvements. These releases suit testing environments and organizations wanting cutting-edge capabilities. However, the support lifecycle remains shorter, requiring more frequent upgrades.
Long Term Support (LTS) releases provide stability and extended maintenance periods. Production environments benefit most from LTS versions since they receive security patches and critical bug fixes for extended periods without introducing disruptive changes. The trade-off involves waiting longer for new features to reach LTS status.
Version Numbering Explained
Varnish version numbers do not themselves encode the release type. Instead, the project explicitly designates long-term support branches: the 6.0.x series is the maintained LTS branch, while the regular branch (7.5, 7.6, 7.7, and so on) receives a new feature release roughly every six months, with each release supported only until shortly after its successor appears.
Release Cycle and EOL Considerations
Each LTS version receives support for approximately six years from release. Understanding End-of-Life (EOL) dates ensures the chosen version will receive security updates throughout the deployment’s expected lifespan. Planning version upgrades before EOL prevents running unsupported software vulnerable to security issues.
Step 1: Updating System Packages
System updates establish a solid foundation before introducing new software. Open a terminal and execute the following commands to ensure all existing packages are current.
Refresh the package repository metadata first:
sudo zypper refresh
This command contacts all configured repositories and downloads the latest package lists. Next, update installed packages to their newest versions:
sudo zypper update
Review the list of packages that will be updated. Zypper displays a summary before proceeding. Type ‘y’ and press Enter to confirm the updates.
The update process may take several minutes depending on how many packages require updating and network speed. A fully updated system reduces compatibility issues and ensures security patches are applied before adding new services.
If dependency conflicts arise during updates, zypper typically suggests solutions. Review these carefully before accepting automated resolutions.
Step 2: Adding Varnish Repository
openSUSE distributes Varnish through the server:http community repository rather than the default system repositories. This provides access to well-maintained, up-to-date Varnish packages.
Understanding openSUSE Package Repositories
The openSUSE Build Service hosts thousands of software packages organized into project repositories. The server:http project specifically focuses on HTTP server software and related tools, making it the authoritative source for Varnish on openSUSE systems.
Adding Repository for openSUSE Tumbleweed
For Tumbleweed installations, add the repository with this command:
sudo zypper addrepo https://download.opensuse.org/repositories/server:http/openSUSE_Tumbleweed/server:http.repo
The system adds the repository configuration and may prompt to trust the repository’s GPG key on first use.
Adding Repository for openSUSE Leap 15.6
Leap 15.6 users should add the version-specific repository:
sudo zypper addrepo https://download.opensuse.org/repositories/server:http/15.6/server:http.repo
Adding Repository for openSUSE Leap 15.7
For Leap 15.7 systems:
sudo zypper addrepo https://download.opensuse.org/repositories/server:http/15.7/server:http.repo
Adding Repository for openSUSE Leap 16.0
Leap 16.0 installations require:
sudo zypper addrepo https://download.opensuse.org/repositories/server:http/16.0/server:http.repo
After adding the repository, refresh the package cache to make the new packages available:
sudo zypper refresh
Should repository connection issues occur, verify network connectivity and check that the URL matches the exact openSUSE version. The openSUSE Software Portal provides alternative repository URLs if needed.
Step 3: Installing Varnish Cache
With the repository configured, installing Varnish becomes straightforward.
Installation Command
Execute the installation command:
sudo zypper install varnish
Zypper resolves dependencies automatically and displays the list of packages to be installed. The installation includes the main Varnish daemon, configuration files, command-line utilities, and documentation. Review the package list and confirm installation by typing ‘y’ when prompted.
The package manager may request permission to trust the repository’s GPG key if this is the first installation from server:http. Accepting the key allows installation to proceed.
Verifying Installation
Confirm Varnish installed successfully by checking its version:
varnishd -V
This displays the Varnish version number, VCL compiler information, and build configuration. A successful output confirms the binary is accessible and executable.
Query package information for additional details:
zypper info varnish
This command shows the installed version, repository source, package size, and description.
Understanding Installed Components
Varnish installation places files in several locations. Configuration files reside in /etc/varnish/, with default.vcl being the primary configuration file. Systemd service definitions live in /usr/lib/systemd/system/. The /var/log/ directory will contain Varnish logs once the service starts.
Several command-line utilities come with Varnish. The varnishd daemon provides the core caching functionality. varnishlog displays detailed request logs. varnishstat shows real-time cache statistics. varnishtest allows testing VCL configurations. varnishadm provides administrative control over the running daemon.
Step 4: Configuring Backend Web Server
Varnish functions as a reverse proxy sitting in front of the backend web server. For this architecture to work properly, the backend must move to a non-standard port while Varnish claims port 80.
Why Backend Configuration is Necessary
HTTP traffic arrives on port 80 by default. Visitors expect to access websites without specifying port numbers. Varnish needs to listen on port 80 to receive incoming requests. After checking its cache, Varnish forwards cache misses to the backend server. The backend must therefore listen on a different port—convention dictates port 8080.
Configuring Apache Backend
For Apache web servers, modify the main configuration file. Locate and edit the Apache configuration:
sudo nano /etc/apache2/listen.conf
Find the line reading Listen 80 and change it to:
Listen 8080
If virtual hosts are configured, edit each VirtualHost directive to match:
<VirtualHost *:8080>
Save the file and test the configuration for syntax errors:
sudo apachectl configtest
A “Syntax OK” message indicates the configuration is valid. Restart Apache to apply changes:
sudo systemctl restart apache2
Configuring Nginx Backend
Nginx users need to modify server block configurations. The main configuration file location varies, but typically resides in /etc/nginx/nginx.conf or /etc/nginx/conf.d/default.conf:
sudo nano /etc/nginx/nginx.conf
Locate server blocks and change the listen directive:
server {
    listen 8080;
    server_name example.com;
    # remaining configuration
}
Test the Nginx configuration:
sudo nginx -t
Successful validation shows “syntax is ok” and “test is successful”. Restart Nginx:
sudo systemctl restart nginx
Verifying Backend Server
Confirm the backend web server responds on port 8080:
curl http://localhost:8080
The command should return HTML content from the web server. If errors occur, check that the web server service is running and listening on the correct port using netstat -tulpn | grep 8080.
Step 5: Configuring Varnish VCL File
Varnish Configuration Language (VCL) defines caching behavior, backend connections, and request handling logic. Understanding VCL basics enables customizing Varnish for specific requirements.
Understanding VCL (Varnish Configuration Language)
VCL is a domain-specific language designed specifically for Varnish. It resembles C in syntax but focuses exclusively on HTTP request and response manipulation. When Varnish loads a VCL file, it compiles the configuration into C code for maximum performance. This compilation step means syntax errors prevent the service from starting.
Editing the Default VCL File
The default VCL file resides at /etc/varnish/default.vcl. Open it with a text editor:
sudo nano /etc/varnish/default.vcl
Alternatively, use vi:
sudo vi /etc/varnish/default.vcl
Configuring Backend Definition
Locate the backend configuration section near the top of the file. It should look similar to:
# backend default {
#     .host = "127.0.0.1";
#     .port = "8080";
# }
Remove the comment characters to activate the backend definition:
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
This configuration tells Varnish to forward cache misses to the web server running on localhost port 8080.
Backend Configuration Options
Advanced backend configurations support additional parameters for fine-tuning connection behavior:
backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 5s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 10s;
    .max_connections = 300;
}
The .connect_timeout parameter sets how long Varnish waits when establishing a connection to the backend. .first_byte_timeout defines the maximum wait time for the backend to begin responding. .between_bytes_timeout specifies timeout for ongoing data transfer. .max_connections limits concurrent connections to prevent overwhelming the backend.
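When a single backend becomes a bottleneck, VCL's directors module can spread cache misses across several servers. The following is an optional sketch, not part of the default configuration; it assumes a hypothetical second backend on port 8081 and requires the `import directors;` line shown:

```vcl
import directors;

backend web1 {
    .host = "127.0.0.1";
    .port = "8080";
}

backend web2 {
    .host = "127.0.0.1";
    .port = "8081";
}

sub vcl_init {
    # Round-robin pool alternating cache misses between the two backends
    new pool = directors.round_robin();
    pool.add_backend(web1);
    pool.add_backend(web2);
}

sub vcl_recv {
    set req.backend_hint = pool.backend();
}
```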
Basic VCL Syntax Rules
VCL uses C-style syntax with semicolons terminating statements. Comments use C++ style with // for single lines or /* */ for blocks. Indentation and whitespace don’t affect functionality but improve readability. Each subroutine defines behavior at different stages of request processing.
Save the VCL file after making changes. Before restarting Varnish, validate the VCL syntax by attempting to compile it:
sudo varnishd -C -f /etc/varnish/default.vcl
This command compiles the VCL and outputs the generated C code. Compilation errors display clearly, indicating the line number and nature of syntax issues.
Step 6: Configuring Varnish Systemd Service
Modern Linux distributions use systemd for service management. Varnish’s systemd unit file requires modification to change listening ports, cache size, and other runtime parameters.
Understanding Systemd Service Management
Systemd unit files define how services start, stop, and behave. The default Varnish service file contains conservative settings suitable for testing but not optimized for production workloads. Customization happens through systemd overrides or editing the unit file directly.
Viewing Default Varnish Service Configuration
Examine the current service configuration:
systemctl cat varnish
This displays the unit file contents, showing the default ExecStart line that launches varnishd with specific parameters.
Editing Varnish Service Configuration
Create a systemd override using the built-in editor:
sudo systemctl edit varnish
This opens an editor with a blank override file. Alternatively, manually create the override directory:
sudo mkdir -p /etc/systemd/system/varnish.service.d/
sudo nano /etc/systemd/system/varnish.service.d/override.conf
Key Configuration Parameters to Modify
The ExecStart line requires complete replacement in overrides. First, clear the existing ExecStart, then define the new one:
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,2G -T 127.0.0.1:6082
Breaking down these parameters:
- -a :80 makes Varnish listen on port 80 for HTTP traffic
- -f /etc/varnish/default.vcl specifies the VCL configuration file path
- -s malloc,2G allocates 2GB of RAM for cache storage using malloc
- -T 127.0.0.1:6082 enables the admin interface on localhost port 6082
Memory Allocation Strategies
The -s parameter determines cache storage method and size. The malloc storage type keeps everything in memory for maximum performance. Specify size as megabytes (256m) or gigabytes (2G). Choose cache size based on available RAM, leaving enough for the operating system and other services.
File-based storage uses disk: -s file,/var/cache/varnish/varnish_storage.bin,10G. This option supports larger cache sizes but performs slower than memory storage. For most scenarios, malloc provides better performance.
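Turning the sizing guidance into a concrete -s value is a one-line calculation. The sketch below assumes a hypothetical machine with 8192 MB of RAM and an arbitrary 60% share for the cache; substitute your own totals:

```shell
# varnish_cache_size TOTAL_MB PERCENT
# Prints a malloc size string (in megabytes) for the -s parameter.
varnish_cache_size() {
  awk -v total="$1" -v pct="$2" 'BEGIN { printf "%dm", int(total * pct / 100) }'
}

# Hypothetical 8 GB host, dedicating 60% of RAM to the cache:
varnish_cache_size 8192 60   # prints 4915m
```

The result plugs directly into the ExecStart line, e.g. -s malloc,4915m.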
Reloading Systemd After Changes
Systemd must reload its configuration after modifying unit files:
sudo systemctl daemon-reload
This command makes systemd aware of the changes without affecting running services. Skipping this step means changes won’t take effect.
Step 7: Enabling and Starting Varnish Service
With configuration complete, enable and start the Varnish service to begin caching.
Enabling Varnish for Auto-Start
Enable Varnish to start automatically on system boot:
sudo systemctl enable varnish
Systemd creates the necessary symlinks to ensure Varnish starts during the boot process. This proves essential for production servers that may reboot for maintenance or updates.
Starting Varnish Service
Launch the Varnish service:
sudo systemctl start varnish
These two commands can be combined:
sudo systemctl enable --now varnish
The --now flag starts the service immediately while also enabling it for future boots.
Checking Varnish Service Status
Verify Varnish is running correctly:
systemctl status varnish
Look for “active (running)” in green text. The output displays process IDs for the Varnish processes and recent log entries. Any errors preventing startup appear here with diagnostic information.
Understanding Varnish Processes
Varnish runs multiple processes. The master process handles management tasks and monitors the worker process. The worker process, running as the vcache user for security, handles all caching and request processing. This separation means if the worker crashes, the master can restart it automatically without full service interruption.
Step 8: Verifying Varnish Installation
Thorough verification ensures Varnish is functioning correctly and actually caching content.
Checking Listening Ports
Confirm Varnish listens on the expected ports:
sudo netstat -tulpn | grep varnish
Or using the newer ss command:
sudo ss -tulpn | grep varnish
The output should show Varnish listening on 0.0.0.0:80 for HTTP traffic and 127.0.0.1:6082 for the admin interface.
Testing Varnish Functionality
Request a page and examine response headers:
curl -I http://localhost
The response includes several headers. Look for Varnish-specific headers like X-Varnish which contains cache object identifiers. The Via header typically includes “varnish” along with the version number. The Age header shows how many seconds the cached object has been stored.
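This header inspection is easy to script. The function below reads raw response headers on stdin and looks for a Via header mentioning Varnish; the sample header text here is fabricated for illustration, and on a live server you would pipe in curl -sI http://localhost instead:

```shell
# Report whether captured response headers indicate the response
# passed through Varnish (a Via header containing "varnish").
check_varnish() {
  if grep -qi '^via:.*varnish'; then
    echo "served via varnish"
  else
    echo "no varnish headers found"
  fi
}

# Illustrative captured headers; on a real server:
#   curl -sI http://localhost | check_varnish
printf 'HTTP/1.1 200 OK\nVia: 1.1 varnish (Varnish/7.5)\nAge: 42\n' | check_varnish
# prints: served via varnish
```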
Testing from External Access
From another machine or web browser, access the website using the server’s IP address or domain name. The site should load normally but faster than before Varnish was installed, especially on subsequent requests for the same content.
Basic Varnish Statistics
Monitor cache performance in real-time:
varnishstat
This displays continually updating statistics. Key metrics include MAIN.cache_hit (requests served from cache) and MAIN.cache_miss (requests requiring backend fetches). The ratio between hits and misses indicates cache effectiveness. Press ‘q’ to quit.
Viewing Varnish Logs
Detailed request logging aids troubleshooting:
sudo varnishlog
This shows every request Varnish processes with extensive detail. The volume of information can be overwhelming. Filter logs for specific requests or use varnishncsa for Apache-style access logs:
sudo varnishncsa
Step 9: Configuring Firewall Rules
openSUSE uses firewalld for firewall management. Allow HTTP traffic through the firewall for external access.
Understanding openSUSE Firewall (firewalld)
Firewalld organizes rules into zones representing different trust levels. The public zone typically handles external network interfaces. Rules can reference services by name or specific ports.
Opening HTTP Port (80)
Allow HTTP service through the firewall:
sudo firewall-cmd --permanent --add-service=http
The --permanent flag ensures the rule persists across reboots. Alternatively, specify the port directly:
sudo firewall-cmd --permanent --add-port=80/tcp
Opening HTTPS Port (443) if needed
If SSL/TLS termination happens at Varnish level (using tools like Hitch), open HTTPS:
sudo firewall-cmd --permanent --add-service=https
Reloading Firewall Configuration
Changes don’t apply until the firewall reloads:
sudo firewall-cmd --reload
Verify rules are active:
sudo firewall-cmd --list-all
The output displays all active rules for the default zone, confirming HTTP access is allowed.
Security Considerations
The admin interface on port 6082 should never be exposed externally. It allows full control over Varnish configuration without authentication. Firewall rules should block external access to this port. The backend web server port (8080) also requires protection from direct external access since it bypasses caching.
Step 10: Testing Cache Performance
Evaluating cache effectiveness ensures Varnish provides the expected performance improvements.
Understanding Cache Hit vs Miss
Every request results in either a cache hit or miss. Cache hits serve content directly from memory without contacting the backend. Misses require fetching content from the backend web server. High hit ratios indicate effective caching, while low ratios suggest configuration issues or non-cacheable content.
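To make hits and misses visible without consulting varnishstat, many deployments add a diagnostic response header in vcl_deliver. This is an optional sketch, not part of the default configuration:

```vcl
sub vcl_deliver {
    # obj.hits counts how many times this object has been served
    # from cache; zero means the response came from the backend
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
```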
Testing Cache Headers
Use curl to examine caching behavior:
curl -I http://localhost
The first request typically shows Age: 0 indicating fresh content just fetched from the backend. Immediately request the same URL again:
curl -I http://localhost
Now Age shows a positive number (seconds since caching). The X-Varnish header contains two numbers on hits (current request ID and cached object ID) but only one number on misses.
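The one-ID-versus-two rule can be scripted as well. This helper counts the IDs in an X-Varnish header value; the sample IDs below are made up for illustration:

```shell
# Classify a response as HIT or MISS from its X-Varnish header value:
# two IDs (request ID plus cached object ID) mean a hit, one means a miss.
classify_xvarnish() {
  ids=$(printf '%s' "$1" | wc -w)
  if [ "$ids" -ge 2 ]; then echo "HIT"; else echo "MISS"; fi
}

classify_xvarnish "32770 32768"   # prints HIT
classify_xvarnish "98306"         # prints MISS
```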
Monitoring Cache Statistics
Launch varnishstat and observe cache behavior while generating requests:
varnishstat
Focus on these key metrics:
- MAIN.cache_hit: cumulative cache hits
- MAIN.cache_miss: cumulative cache misses
- MAIN.cache_hitpass: hits on pass objects (uncacheable)
Calculate hit ratio: (hits / (hits + misses)) × 100. Ratios above 80% indicate good cache performance. Lower ratios suggest investigating what content isn’t being cached and why.
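The hit-ratio formula is simple enough to evaluate in the shell while watching varnishstat. The counter values below are hypothetical; substitute the MAIN.cache_hit and MAIN.cache_miss numbers from your own output:

```shell
# hit_ratio HITS MISSES
# Prints the cache hit ratio as a percentage, one decimal place.
hit_ratio() {
  awk -v h="$1" -v m="$2" 'BEGIN { printf "%.1f%%", h / (h + m) * 100 }'
}

# Hypothetical counters: 8500 hits, 1200 misses
hit_ratio 8500 1200   # prints 87.6%
```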
Load Testing with Simple Tools
Apache Bench provides basic load testing:
ab -n 1000 -c 10 http://localhost/
This sends 1000 requests with 10 concurrent connections. Compare response times before and after Varnish. More sophisticated tools like wrk offer advanced features:
wrk -t4 -c100 -d30s http://localhost/
Identifying Non-Cached Content
Some content won’t cache due to headers, cookies, or VCL rules. Use varnishlog to understand cache decisions:
sudo varnishlog -g request -i ReqURL -i VCL_call
This shows which VCL subroutines handle requests and their caching decisions. Look for “pass” decisions which bypass caching entirely.
Advanced VCL Configuration (Optional)
Basic VCL configuration works for many scenarios, but customization optimizes performance for specific needs.
Customizing Cache Behavior
The vcl_recv subroutine processes incoming requests before cache lookup. Modify it to control which requests get cached:
sub vcl_recv {
    # If the URL carries marketing tracking parameters, strip the
    # entire query string (everything after "?") before cache lookup
    if (req.url ~ "(\?|&)(utm_|gclid=|fbclid=)") {
        set req.url = regsub(req.url, "\?.*$", "");
    }
}
The vcl_backend_response subroutine handles responses from the backend before storing them in cache. Set custom TTL values:
sub vcl_backend_response {
    # Cache static files for one hour
    if (bereq.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
        set beresp.ttl = 1h;
    }
}
Excluding Specific URLs from Cache
Prevent caching of admin panels and dynamic content:
sub vcl_recv {
    if (req.url ~ "^/admin" || req.url ~ "^/wp-admin") {
        return (pass);
    }
}
The pass action bypasses cache entirely for matching requests.
Custom Error Pages
Display user-friendly error messages when the backend fails:
sub vcl_backend_error {
    set beresp.http.Content-Type = "text/html; charset=utf-8";
    synthetic({"
<!DOCTYPE html>
<html>
<head><title>Service Temporarily Unavailable</title></head>
<body><h1>We'll be right back!</h1></body>
</html>
"});
    return (deliver);
}
Grace Mode Configuration
Serve slightly stale content when the backend becomes unavailable:
import std;

sub vcl_backend_response {
    set beresp.grace = 1h;
}

sub vcl_recv {
    # While the backend is unhealthy, drop cookies so stale cached
    # objects can be served (std.healthy requires the import above)
    if (!std.healthy(req.backend_hint)) {
        unset req.http.Cookie;
    }
}
Grace mode improves availability during backend maintenance or failures.
Troubleshooting Common Issues
Even with careful configuration, issues occasionally arise. These troubleshooting techniques resolve common problems.
Varnish Service Won’t Start
Check systemd journal for detailed error messages:
sudo journalctl -xeu varnish
VCL syntax errors prevent startup. The journal shows compilation errors with line numbers. Port conflicts occur if another service already listens on port 80. Use netstat -tulpn | grep :80 to identify the conflicting service.
Insufficient file permissions can prevent reading configuration files. Verify permissions on /etc/varnish/default.vcl allow the varnish user to read the file.
Backend Connection Failures
First confirm the backend web server is running:
systemctl status apache2
Or for Nginx:
systemctl status nginx
Test backend accessibility directly:
curl http://localhost:8080
If this fails, the problem lies with the backend configuration, not Varnish. Verify the backend listens on the correct port. Check VCL configuration specifies the right host and port in the backend definition.
Low Cache Hit Ratio
Analyze why content isn’t caching. Check backend response headers:
curl -I http://localhost:8080/
Cache-Control headers with no-cache, no-store, or max-age=0 prevent caching. Cookies often prevent caching since content might be user-specific. Review VCL for overly aggressive pass rules.
Insufficient cache size causes premature eviction of cached objects. Monitor cache usage with varnishstat and increase allocated memory if needed.
Performance Not Improving
Verify Varnish is actually serving requests by checking response headers for Varnish-specific information. If cache hit ratio remains low, most requests still hit the backend.
Backend performance bottlenecks still affect overall response times since cache misses must wait for backend processing. Optimize the backend web server separately.
Thread pool exhaustion under high load requires tuning the thread_pool runtime parameters, either with varnishadm param.set or with -p options on the varnishd command line (thread settings are runtime parameters, not VCL).
VCL Compilation Errors
Syntax errors appear when attempting to load or compile VCL. Test VCL syntax without starting the service:
sudo varnishd -C -f /etc/varnish/default.vcl
Error messages indicate the line number and nature of syntax problems. Common issues include missing semicolons, unmatched braces, or invalid subroutine names.
Performance Optimization Tips
Fine-tuning Varnish configuration extracts maximum performance from the caching layer.
Tuning Cache Size
Calculate optimal cache size based on content volume and access patterns. Monitor varnishstat for n_lru_nuked (objects evicted to make room). Frequent evictions suggest insufficient cache size. Increase memory allocation in the systemd unit file.
Allocating too much memory starves other system services. Leave adequate RAM for the operating system, backend web server, and other applications. A general rule allocates 50-70% of available RAM to Varnish on dedicated caching servers.
Thread Pool Optimization
Varnish uses thread pools to handle concurrent requests. Default settings work for moderate loads but may require adjustment under heavy traffic:
sudo varnishadm param.set thread_pool_min 50
sudo varnishadm param.set thread_pool_max 5000
sudo varnishadm param.set thread_pool_timeout 120
Parameters set this way take effect immediately but do not survive a daemon restart; add matching -p options to the ExecStart line to make them permanent. Monitor thread creation with varnishstat. Thread starvation causes request queuing and increased response times.
Backend Timeout Configuration
Adjust timeouts based on backend performance characteristics. Fast backends benefit from shorter timeouts that fail quickly rather than waiting:
backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 3s;
    .first_byte_timeout = 30s;
    .between_bytes_timeout = 5s;
}
Dynamic content requiring database queries needs longer first_byte_timeout values.
VCL Performance Best Practices
Complex regular expressions in VCL impact performance. Simplify regex patterns where possible. Use simple string matching for exact comparisons. Group multiple conditions efficiently using logical operators.
Minimize unnecessary backend requests by caching as much content as possible. Implement efficient cache invalidation strategies that target specific objects rather than purging everything.
Security Best Practices
Securing Varnish protects both the caching infrastructure and cached content.
Securing Admin Interface
The management interface provides powerful capabilities without authentication. Restrict access via firewall rules allowing only localhost connections. Never expose port 6082 to public networks.
For additional security, configure secret file authentication. Create a secret file:
sudo sh -c 'openssl rand -base64 32 > /etc/varnish/secret'
sudo chmod 600 /etc/varnish/secret
The sh -c wrapper is required because the output redirection would otherwise run as the unprivileged user rather than as root.
Reference this in the ExecStart line with -S /etc/varnish/secret.
Protecting Against Cache Poisoning
Normalize cache keys to prevent attackers from polluting the cache with malicious content. Strip unnecessary query parameters and headers:
sub vcl_recv {
    # Remove query parameters from static files
    if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)(\?.*)?$") {
        set req.url = regsub(req.url, "\?.*$", "");
    }
}
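Another common normalization collapses equivalent Host header values so the same page is not cached once per spelling. This is an optional sketch, assuming the site is served under one canonical host; std.tolower requires the `import std;` line shown:

```vcl
import std;

sub vcl_recv {
    # Drop any port suffix and lowercase the host so variants such as
    # "Example.com:80" and "example.com" share one cache entry
    set req.http.Host = regsub(req.http.Host, ":[0-9]+$", "");
    set req.http.Host = std.tolower(req.http.Host);
}
```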
Handling Sensitive Content
Never cache authenticated pages or content containing personal information. Exclude authentication areas:
sub vcl_recv {
    if (req.http.Authorization || req.http.Cookie ~ "session") {
        return (pass);
    }
}
Strip Set-Cookie headers from cacheable static resources to improve cache efficiency without security risks.
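Stripping Set-Cookie for static assets happens in vcl_backend_response. A minimal sketch (the file-extension list is an example, not exhaustive):

```vcl
sub vcl_backend_response {
    # Static assets should never set cookies; removing the header
    # makes these responses cacheable for all visitors
    if (bereq.url ~ "\.(jpg|jpeg|png|gif|css|js|woff2?)$") {
        unset beresp.http.Set-Cookie;
    }
}
```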
Regular Updates and Patching
Keep Varnish updated with the latest security patches:
sudo zypper update varnish
Monitor Varnish security advisories and the openSUSE security announce mailing list. Test updates in staging environments before applying to production systems.
Maintenance and Monitoring
Ongoing maintenance ensures Varnish continues performing optimally over time.
Regular Health Checks
Implement automated monitoring to detect service failures or performance degradation. A simple health check runs curl from the monitoring system (here alert is a placeholder for whatever notification command your environment uses):
curl -f -s -o /dev/null http://localhost/ || alert
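A slightly fuller sketch separates the probe from the reporting so the result can be fed to any alerting tool. The messages and the 5-second timeout below are arbitrary choices, not Varnish conventions:

```shell
# health_report: translate a curl exit status into a monitoring message.
# Exit status 0 means the probe succeeded; anything else is a failure.
health_report() {
  if [ "$1" -eq 0 ]; then
    echo "varnish: OK"
  else
    echo "varnish: FAILED (curl exit $1)"
  fi
}

# On the real server the probe would be:
#   curl -f -s -o /dev/null --max-time 5 http://localhost/
#   health_report $?
health_report 0   # prints varnish: OK
health_report 7   # prints varnish: FAILED (curl exit 7)
```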
More sophisticated monitoring tracks cache hit ratios, backend health, and response times.
Log Rotation and Management
Varnish logs can grow large quickly. Configure logrotate to manage log files:
sudo nano /etc/logrotate.d/varnish
Add the rotation configuration. Files under /var/log/varnish/ are written by the varnishncsa logging service rather than by varnishd itself, so that is the service to signal after rotation so it reopens its log file:
/var/log/varnish/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    postrotate
        systemctl reload varnishncsa > /dev/null 2>&1 || true
    endscript
}
Cache Invalidation and Purging
Content updates require cache invalidation. Use varnishadm to purge specific content:
sudo varnishadm "ban req.url ~ /specific-page"
The ban command accepts expressions matching objects to invalidate. Implement purge capabilities in VCL for application-triggered cache clearing.
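Application-triggered purging is usually implemented with an HTTP PURGE method guarded by an ACL. A minimal sketch; the ACL below only trusts localhost, so extend it with your application servers' addresses:

```vcl
acl purge_allowed {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        # Reject purge attempts from untrusted clients
        if (!client.ip ~ purge_allowed) {
            return (synth(405, "Purging not allowed"));
        }
        return (purge);
    }
}
```

An application can then invalidate a page with, for example, curl -X PURGE http://localhost/some-page.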
Updating Varnish
Check for available updates:
zypper list-updates
Update Varnish to the latest version:
sudo zypper update varnish
After major version updates, review VCL syntax changes in the release notes. Test VCL compilation before restarting the service. Roll back updates if compatibility issues arise.
Backing Up Configuration
Maintain backups of all configuration files. At minimum, back up:
- /etc/varnish/default.vcl
- /etc/systemd/system/varnish.service.d/override.conf
- Any custom scripts or configurations
Version control systems like Git track configuration changes over time, enabling easy rollback and audit trails of modifications.
Congratulations! You have successfully installed Varnish. Thanks for using this tutorial for installing the Varnish HTTP Cache on your openSUSE Linux system. For additional help or useful information, we recommend you check the official Varnish website.