Wget Command in Linux with Examples
In this tutorial, we will show you how to use the wget command in Linux with our comprehensive guide! In the realm of Linux system administration and development, the wget command stands out as an indispensable tool for downloading files from the internet. This article delves into the intricacies of the wget command, providing a detailed exploration of its features, usage, and practical applications. From basic downloads to advanced mirroring techniques, you’ll gain a solid understanding of how to leverage wget for efficient file management in your Linux environment.
Understanding Wget
What is Wget?
Wget, short for “Web Get,” is a free and open-source command-line utility used for retrieving content from web servers. It supports HTTP, HTTPS, and FTP protocols, making it versatile for various download tasks. Unlike graphical download managers, wget operates in a non-interactive mode, ideal for scripting and automation. Think of it as your automated downloader, tirelessly fetching files without needing constant supervision.
While other tools like curl and ftp can also download files, wget distinguishes itself with its robustness and ability to handle unstable network connections. Curl is excellent for transferring data, but wget excels at downloading files and entire websites recursively. Ftp, while a reliable protocol, is often less secure than HTTPS, which wget fully supports.
Key Features of Wget
- Non-Interactive Download Capability: Wget can operate without user intervention, making it perfect for scripts and automated tasks. Set it up, and let it run!
- Protocol Support: Supporting HTTP, HTTPS, and FTP, wget can handle a wide variety of download sources. Versatility is its strength.
- Resilience to Network Interruptions: Wget can resume interrupted downloads, saving time and bandwidth. Network glitch? No problem for wget.
- Recursive Downloads: Download entire websites or directories with ease using recursive downloading. Grab everything you need in one go.
- Bandwidth Throttling: Limit the download speed to avoid overwhelming the network. Be a considerate network user.
Installation of Wget
Before diving into usage, ensure wget is installed on your system. Most Linux distributions come with wget pre-installed. To check, open your terminal and type:
wget --version
If wget is installed, the version information will be displayed. If not, follow the instructions below for your specific distribution.
Ubuntu/Debian:
sudo apt update
sudo apt install wget
CentOS/RHEL/Fedora/Rocky Linux/AlmaLinux:
sudo dnf install wget
On older releases that still use yum:
sudo yum install wget
Arch Linux:
sudo pacman -S wget
After installation, verify by running wget --version again.
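If you manage several machines from one script, you can probe for the package manager before installing. A minimal sketch; the probe order (first match wins) is an assumption, adjust it to your fleet:

```shell
# Pick the wget install command by probing for a known package manager.
if command -v apt-get >/dev/null 2>&1; then
  installer="sudo apt install wget"
elif command -v dnf >/dev/null 2>&1; then
  installer="sudo dnf install wget"
elif command -v yum >/dev/null 2>&1; then
  installer="sudo yum install wget"
elif command -v pacman >/dev/null 2>&1; then
  installer="sudo pacman -S wget"
else
  installer="unknown package manager"
fi
echo "Install with: $installer"
```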
Basic Usage of Wget
Basic Syntax
The fundamental syntax of the wget command is straightforward:
wget [options] [url]
- wget: The command itself.
- [options]: Optional flags to modify the download behavior.
- [url]: The URL of the file to download.
Downloading a Single File
To download a single file, simply provide the URL:
wget https://example.com/file.txt
This command downloads file.txt from example.com and saves it in the current directory. The output in the terminal will show the download progress, speed, and file size.
Example Output:
--2025-02-14 10:00:00-- https://example.com/file.txt
Resolving example.com (example.com)... 93.184.216.34, 2606:2800:220:1:248:1893:25c8:1946
Connecting to example.com (example.com)|93.184.216.34|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12345 (12K) [text/plain]
Saving to: ‘file.txt’
100%[======================================>] 12,345 --.-KB/s in 0.01s
2025-02-14 10:00:00 (1.02 MB/s) - ‘file.txt’ saved [12345/12345]
The output provides valuable information:
- Resolving example.com: The DNS lookup process.
- Connecting to example.com: Establishing a connection to the server.
- HTTP request sent, awaiting response: The request being sent and the server’s response code (200 OK indicates success).
- Length: 12345 (12K): The size of the file.
- Saving to: ‘file.txt’: The name the file will be saved as.
- 100%[…]: The download progress bar.
- (1.02 MB/s): The download speed.
- ‘file.txt’ saved: Confirmation that the download is complete.
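Because wget is non-interactive, its exit status is the main success signal in scripts: 0 means success, and nonzero values indicate the failure class. A small wrapper sketch; the URL and directory in the usage comment are illustrative, so the live call is commented out:

```shell
# fetch URL [DIR]: download quietly into DIR (default: current
# directory), preserving wget's exit status for the caller.
fetch() {
  wget -q -P "${2:-.}" "$1"
}

# Example usage (commented out to avoid a live network call):
# if fetch https://example.com/file.txt /tmp; then
#   echo "Downloaded."
# else
#   echo "wget failed with exit code $?" >&2
# fi
```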
Downloading Files to a Specific Directory
To save the downloaded file to a specific directory, use the -P option (or --directory-prefix):
wget -P /path/to/directory https://example.com/file.txt
Replace /path/to/directory with the actual path to the desired directory. For example:
wget -P /home/user/downloads https://example.com/file.txt
This saves file.txt to the /home/user/downloads directory.
Renaming Files During Download
The -O option (or --output-document) allows you to rename the file as it’s being downloaded:
wget -O new_file_name.txt https://example.com/file.txt
This downloads file.txt but saves it as new_file_name.txt in the current directory.
Advanced Wget Options
Downloading Multiple Files
Wget can download multiple files using the -i option (or --input-file). Create a text file (e.g., urls.txt) with each URL on a new line:
https://example.com/file1.txt
https://example.com/file2.txt
https://example.com/file3.txt
Then, use the command:
wget -i urls.txt
Wget will download each file listed in urls.txt.
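If the URLs follow a pattern, you can generate urls.txt in the shell instead of typing it by hand. A small sketch, assuming a hypothetical numbered file series on example.com:

```shell
# Generate urls.txt for a numbered series of files (hypothetical URLs).
base="https://example.com"
for i in 1 2 3; do
  echo "$base/file$i.txt"
done > urls.txt

# Then download them all (commented out to avoid a live network call):
# wget -i urls.txt
```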
Recursive Downloads
The -r option (or --recursive) enables recursive downloading, allowing you to download entire websites or directories. Be cautious when using this option, as it can consume significant bandwidth and storage.
wget -r https://example.com/directory/
This command downloads the /directory/ and all its contents, including subdirectories and files. To limit the depth of recursion, use the -l option (or --level):
wget -r -l 2 https://example.com/directory/
This limits the recursion depth to 2 levels.
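When recursing, it is often useful to stop wget from climbing above the starting directory into the rest of the site; the --no-parent option does that. A quick example, using the same illustrative URL:

```shell
# Recurse 2 levels deep, but never ascend above /directory/:
wget -r -l 2 --no-parent https://example.com/directory/
```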
To convert the links in the downloaded files to point to local files (for offline browsing), use the -k option (or --convert-links) and the -p option (or --page-requisites):
wget -r -k -p https://example.com/
This downloads the website and adjusts the links for local viewing.
Limiting Download Speed
To prevent wget from consuming all available bandwidth, use the --limit-rate option. Specify the desired rate in kilobytes per second (k) or megabytes per second (m):
wget --limit-rate=200k https://example.com/large_file.zip
This limits the download speed to 200 KB/s.
Resuming Interrupted Downloads
The -c option (or --continue) allows wget to resume an interrupted download. This is particularly useful for large files or unstable network connections.
wget -c https://example.com/large_file.zip
If the download is interrupted, running the same command again with the -c option will resume the download from where it left off.
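For flaky connections you can also let wget retry automatically, combining -c with the --tries and --waitretry options. An illustrative command, same placeholder URL:

```shell
# Resume (-c), retry up to 10 times (-t 10), waiting up to 5s between tries:
wget -c -t 10 --waitretry=5 https://example.com/large_file.zip
```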
Creating a Mirror of a Website
The --mirror option combines several options to create a local mirror of a website. It’s equivalent to using -r -N -l inf --no-remove-listing.
wget --mirror https://example.com/
- -r: Recursive download.
- -N: Only download files newer than the local versions.
- -l inf: Infinite recursion depth.
- --no-remove-listing: Keep the server’s directory listing files.
To make the mirror suitable for offline browsing, add the -k and -p options:
wget --mirror -k -p https://example.com/
Practical Examples of Wget
Example Scenarios
Downloading Images from a Website:
Suppose you want to download all the images from a specific webpage. You can use wget in conjunction with other command-line tools like grep and sed.
wget -q -O - https://example.com/gallery.html | grep -o 'img src="[^"]*"' | sed -e 's/img src="//' -e 's/"$//' | xargs -n 1 wget
This command does the following:
- wget -q -O - https://example.com/gallery.html: Downloads the HTML content of the page and outputs it to standard output.
- grep -o 'img src="[^"]*"': Extracts all img src attributes from the HTML, including the surrounding quotes.
- sed -e 's/img src="//' -e 's/"$//': Removes the img src=" prefix and the trailing quote, leaving only the image URLs.
- xargs -n 1 wget: Passes each image URL to wget for downloading.
Note that this only works as-is for absolute image URLs; relative paths would need the site prefix prepended first.
Bulk Downloading Software Packages:
Imagine you have a list of software packages to download. Create a file named packages.txt with the URLs of the packages:
https://example.com/package1.deb
https://example.com/package2.rpm
https://example.com/package3.tar.gz
Then, use wget to download all the packages:
wget -i packages.txt
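When the list is long, the downloads can optionally run in parallel by handing the URLs to xargs instead of -i. A sketch, reusing the illustrative packages.txt URLs; the live download is commented out:

```shell
# Build the URL list (illustrative URLs, matching the example above).
printf '%s\n' \
  "https://example.com/package1.deb" \
  "https://example.com/package2.rpm" \
  "https://example.com/package3.tar.gz" > packages.txt

# -n 1 gives each wget process one URL; -P 4 runs up to four at a time.
# Commented out here to avoid a live network call:
# xargs -n 1 -P 4 wget -q < packages.txt
```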
Using Wget in Scripts
Wget’s non-interactive nature makes it ideal for use in scripts. Here’s an example script demonstrating automated downloads:
#!/bin/bash
# Script to download files from a list of URLs

URLS_FILE="urls.txt"
DOWNLOAD_DIR="/home/user/downloads"

# Check if the URLs file exists
if [ ! -f "$URLS_FILE" ]; then
    echo "Error: $URLS_FILE not found."
    exit 1
fi

# Create the download directory if it doesn't exist
mkdir -p "$DOWNLOAD_DIR"

# Loop through the URLs in the file (-r keeps backslashes literal)
while IFS= read -r URL; do
    # Download the file and check wget's exit status
    if wget -P "$DOWNLOAD_DIR" "$URL"; then
        echo "Successfully downloaded: $URL"
    else
        echo "Failed to download: $URL"
    fi
done < "$URLS_FILE"

echo "Download process completed."
Save this script to a file (e.g., download_script.sh) and make it executable:
chmod +x download_script.sh
And then run it:
./download_script.sh
This script reads URLs from urls.txt, downloads each file to the /home/user/downloads directory, and provides feedback on the success or failure of each download.
Troubleshooting Common Issues
Common Errors and Solutions
Handling ‘404 Not Found’ Errors:
A ‘404 Not Found’ error indicates that the requested file does not exist on the server. Double-check the URL for typos or broken links. The file might have been moved or deleted.
Dealing with Connection Timeouts:
Connection timeouts occur when wget cannot establish a connection with the server within a certain time. This could be due to network issues or a server that is down or overloaded. Try increasing the timeout value using the --timeout option:
wget --timeout=60 https://example.com/file.txt
This sets the timeout to 60 seconds. If the issue persists, check your network connection or try again later.
Ensuring Successful Downloads:
- Verify the URL: Ensure the URL is correct and accessible.
- Check Disk Space: Make sure you have enough free disk space to store the downloaded files.
- Firewall Settings: Firewall rules might be blocking wget’s access to the internet. Adjust your firewall settings if necessary.
- Proxy Settings: If you are behind a proxy server, configure wget to use the proxy with the -e option:
wget -e use_proxy=yes -e http_proxy=http://your_proxy_address:port https://example.com/file.txt
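Alternatively, wget honors the standard proxy environment variables, which is convenient when several commands in a session need the same proxy. The address and port below are placeholders:

```shell
# Set the standard proxy variables for this shell session (placeholders).
export http_proxy="http://your_proxy_address:port"
export https_proxy="http://your_proxy_address:port"

# Subsequent wget calls will route through the proxy, e.g.:
# wget https://example.com/file.txt
```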