How To Install DeepSeek on Ubuntu 24.04 LTS

DeepSeek is an advanced open-source AI model designed for natural language processing, offering powerful capabilities such as text generation, summarization, and complex reasoning. Installing DeepSeek locally on your Ubuntu 24.04 LTS system provides numerous advantages, including enhanced privacy, complete control over your data, and the ability to operate without constant internet connectivity. This comprehensive guide walks you through the entire installation process, from preparing your system to running and configuring DeepSeek for optimal performance. By following these detailed instructions, you’ll be able to harness the power of cutting-edge AI technology directly on your own machine, eliminating the need for cloud-based alternatives that may compromise your privacy or come with usage limitations.

Prerequisites and System Requirements

Before diving into the installation process, it’s essential to ensure your system meets the necessary requirements to run DeepSeek effectively. The model’s performance depends significantly on your hardware specifications and having the proper software environment set up.

Hardware Requirements

DeepSeek is a sophisticated AI model that requires substantial computational resources to operate efficiently. For optimal performance, your system should have a modern multi-core processor with at least 4 cores. Memory requirements vary depending on the specific DeepSeek model you intend to run. At minimum, 8GB of RAM is necessary, but 16GB or more is strongly recommended for smoother operation and better response times. This is particularly important when running larger model variants that demand more memory to handle their extensive parameter sizes.

While DeepSeek can operate on CPU-only systems, having a compatible NVIDIA GPU will dramatically improve performance by leveraging parallel processing capabilities. A dedicated GPU with at least 6GB of VRAM is recommended for running smaller models, while larger variants may require 8GB or more. Storage space is another crucial consideration – you’ll need at least 10GB of free disk space for the base installation, with additional space required for larger models that can range from 5-10GB for smaller parameter models to 20-50GB for larger ones.
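
You can check your machine against these requirements directly from the terminal:

nproc
free -h
df -h /
lspci | grep -i nvidia

These report the CPU core count, available memory, free space on the root partition, and whether an NVIDIA GPU is present, respectively.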

Software Prerequisites

Ubuntu 24.04 LTS serves as the foundation for our DeepSeek installation. Verify your Ubuntu version by running lsb_release -a in the terminal. A stable internet connection is essential for downloading the necessary packages and model files. Depending on your model choice, downloads can range from several gigabytes to tens of gigabytes, so a reliable and reasonably fast connection will make the process much smoother.
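
On a 24.04 system the output will look similar to this:

Distributor ID: Ubuntu
Description:    Ubuntu 24.04 LTS
Release:        24.04
Codename:       noble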

Basic familiarity with terminal commands is necessary, as most of the installation process involves using the command line interface. You should be comfortable with navigating directories, editing files, and executing commands with administrator privileges. Additionally, some understanding of system administration concepts like services and network configuration will be helpful for advanced setup options.

Preparing Your Ubuntu System

A well-prepared system ensures a smooth installation process and optimal performance for DeepSeek. Taking the time to properly update your system and install essential dependencies will help prevent common issues during installation.

Updating Your System

Before proceeding with any installation, it’s crucial to ensure your Ubuntu 24.04 system is fully up to date. Open a terminal window and execute the following commands to update your package repositories and upgrade existing packages:

sudo apt update && sudo apt upgrade -y

This command synchronizes your package index with the Ubuntu repositories and installs available upgrades for all installed packages. The -y flag automatically confirms any prompts during the upgrade process. After the upgrade completes, it’s a good practice to reboot your system to ensure all updates are properly applied, especially if kernel updates were installed:

sudo reboot

Rebooting ensures that any system-level changes take effect and provides a clean slate for the installation process, reducing the likelihood of conflicts or unexpected behavior.
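
Ubuntu records pending reboots in a flag file, so you can check whether one is actually required:

[ -f /var/run/reboot-required ] && cat /var/run/reboot-required || echo "No reboot required"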

Installing Essential Dependencies

DeepSeek relies on several key components, including Python, pip (Python’s package manager), and Git for repository management. Most Ubuntu 24.04 installations come with Python pre-installed, but it’s important to verify the version and install any missing components:

sudo apt install python3 python3-pip git -y

After installation, verify that all components are correctly installed and check their versions:

python3 --version
pip3 --version
git --version

DeepSeek requires Python 3.8 or higher, so ensure your installed version meets this requirement. If you need to install a specific Python version, consider using tools like pyenv for managing multiple Python environments. These essential dependencies provide the foundation for installing both Ollama (which will manage the DeepSeek model) and the Web UI interface that will make interacting with DeepSeek more intuitive.
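
A quick one-liner confirms the interpreter meets that minimum (a generic Python check, nothing DeepSeek-specific):

python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version'

Silence means your Python is new enough; an AssertionError shows the version that was found instead.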

Installing Ollama Platform

Ollama is a crucial component in our DeepSeek installation as it provides the infrastructure needed to effectively manage and run large language models locally.

What is Ollama?

Ollama is a specialized platform designed to simplify the process of running powerful large language models like DeepSeek on local machines. It handles the complex tasks of model management, optimization, and execution, providing a streamlined interface for users to interact with these sophisticated AI systems. Think of Ollama as the engine that powers DeepSeek, managing memory allocation, processing requests, and delivering responses in an efficient manner.

One of Ollama’s key advantages is its ability to abstract away much of the technical complexity involved in running AI models. Without such a platform, users would need to manually handle model weights, configure inference parameters, and manage system resources – tasks that require significant technical expertise. Ollama automates these processes, allowing even those with limited technical knowledge to harness the capabilities of advanced AI models like DeepSeek.

Ollama Installation Process

Installing Ollama on Ubuntu 24.04 is straightforward using the official installation script. Open a terminal and execute the following command:

curl -fsSL https://ollama.com/install.sh | sh

This command downloads and runs the Ollama installation script, which automatically sets up the necessary components on your system. After installation completes, verify that Ollama was installed correctly by checking its version:

ollama --version

You should see the current version number displayed, confirming a successful installation. Next, start the Ollama service and configure it to launch automatically at system startup:

sudo systemctl start ollama
sudo systemctl enable ollama

The first command starts the Ollama service immediately, while the second ensures it will start automatically every time your system boots. To verify that the service is running correctly, check its status:

sudo systemctl status ollama

You should see output indicating that the service is active and running. If you encounter any issues, the status output will provide diagnostic information that can help identify the problem. With Ollama successfully installed and running, you’ve established the foundation needed to download and run DeepSeek models.
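
Ollama also exposes a local HTTP API, by default on port 11434, which gives you another quick health check:

curl http://127.0.0.1:11434

A healthy instance responds with a short “Ollama is running” message.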

Downloading and Installing DeepSeek Models

With Ollama now running on your system, the next step is to download and install your chosen DeepSeek model. DeepSeek offers multiple model variants to suit different needs and hardware capabilities.

Understanding DeepSeek Model Variants

DeepSeek offers several model sizes identified by the number of parameters they contain. The most common variants include the DeepSeek-R1 series with sizes ranging from 1.5 billion parameters (1.5b) to much larger models with 7b, 8b, 32b, and even 70b parameters. Each model size represents a tradeoff between capability and resource requirements, with larger models generally providing more sophisticated responses but demanding more memory and processing power.

Smaller models like the 1.5b variant require less RAM and can run reasonably well on CPU-only systems, making them suitable for machines with limited resources. In Ollama’s default quantization, the 1.5b download occupies only around 1-2GB of disk space. Mid-sized models (7b-8b) offer a good balance between performance and resource consumption, requiring moderate GPU capabilities and roughly 4-5GB of disk space each. The largest models (32b-70b) deliver the most impressive results but demand powerful GPUs and substantial RAM, along with roughly 20-45GB of storage space.

Consider your hardware capabilities and specific use case when selecting a model. For general experimentation on modest hardware, starting with the 1.5b or 7b model is recommended. As you become more familiar with DeepSeek and potentially upgrade your hardware, you can explore larger models with enhanced capabilities.

Downloading Your Chosen Model

Once you’ve decided which DeepSeek model variant best suits your needs and hardware capabilities, you can download it using Ollama with a simple command. For example, to download the DeepSeek-R1 7b model, run:

ollama run deepseek-r1:7b

For a smaller model that requires fewer resources, you might prefer:

ollama run deepseek-r1:1.5b

When you execute this command, Ollama will automatically download the requested model from its repository. The download can take several minutes to complete depending on your internet connection speed and the model size. You’ll see a progress indicator showing how much of the model has been downloaded and the estimated time remaining.

After the download completes, the model will be loaded into memory and you’ll be presented with a prompt where you can start interacting with DeepSeek immediately. Try asking a simple question to verify that the model is working correctly:

Tell me what you can do.

The model should respond with information about its capabilities. To exit this interactive mode, type /bye or press Ctrl+D. You can verify that the model is available in your local Ollama library by running:

ollama list

This command displays all the models you have downloaded, including your newly installed DeepSeek model.
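
The output will look something like this (the ID, size, and timestamp shown here are purely illustrative):

NAME              ID              SIZE      MODIFIED
deepseek-r1:7b    a1b2c3d4e5f6    4.7 GB    2 minutes ago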

Setting Up Open WebUI

While the command-line interface is functional, many users prefer a more intuitive graphical interface for interacting with AI models. Open WebUI provides an excellent solution for this purpose.

Benefits of Using a Web Interface

A web interface transforms how you interact with DeepSeek, making it significantly more accessible and user-friendly. Instead of typing commands in a terminal, you’ll have a chat-like interface reminiscent of popular services like ChatGPT, complete with conversation history, formatting options, and an intuitive design. This familiar format makes it easier to formulate queries, visualize responses, and maintain context across multiple interactions.

The Open WebUI also excels at document processing tasks, allowing you to upload files for DeepSeek to analyze, summarize, or extract information from. This capability is particularly valuable for research, content creation, and data analysis workflows. Additionally, the web interface provides access to model settings and parameters without requiring you to remember specific command syntaxes, making it easier to optimize DeepSeek’s performance for different tasks.

For teams or households where multiple users might want to access DeepSeek, the web interface enables sharing the resource across your local network. This means DeepSeek can be accessed from any device with a web browser connected to your network, extending its utility beyond just the machine where it’s installed.

Installing Python Virtual Environment

Before installing Open WebUI, it’s advisable to create a Python virtual environment to isolate its dependencies from your system-wide Python installation. This approach prevents potential conflicts between packages and makes it easier to manage the installation.

First, install the necessary tools for creating virtual environments:

sudo apt install python3-venv -y

Next, create a new virtual environment in your home directory or another location of your choice:

python3 -m venv ~/open-webui-venv

This command creates a new directory called open-webui-venv containing a self-contained Python environment. To activate this environment, run:

source ~/open-webui-venv/bin/activate

You’ll notice your terminal prompt changes to indicate that the virtual environment is active. Any Python packages you install while the environment is active will be isolated to this environment, not affecting your system-wide Python installation.
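
When you are done working in the environment, you can leave it at any time with:

deactivate

Your prompt returns to normal, and pip once again refers to the system-wide Python installation.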

Installing Open WebUI

With your virtual environment activated, you can now install Open WebUI using pip:

pip install open-webui

This command downloads and installs Open WebUI along with all its dependencies. The installation process may take a few minutes as it fetches and configures numerous packages. Open WebUI pulls in PyTorch, and the default PyTorch wheels bundle large CUDA libraries; if your machine has no NVIDIA GPU, one common way to avoid them is to install the CPU-only PyTorch build first, then install Open WebUI on top of it:

pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install open-webui

Because a CPU-only torch is already present, pip will reuse it instead of pulling the GPU-enabled packages, saving several gigabytes of downloads.

Starting the Web Interface

Once the installation is complete, you can start the Open WebUI server with a simple command:

open-webui serve

The server listens on port 8080, so on the machine itself you can access the interface by opening a web browser and navigating to http://localhost:8080. To make sure the interface is also reachable from other devices on your network, start the server explicitly bound to all network interfaces:

open-webui serve --host 0.0.0.0

This binds the server to all network interfaces, making it accessible from other devices on your local network.
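
If something else on your machine already occupies port 8080, open-webui serve also accepts a --port flag, so you can pick another port:

open-webui serve --host 0.0.0.0 --port 8081

Remember to use the new port number in the URLs and firewall rules that follow.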

When you first access the Web UI, you’ll need to create an account for authentication. After logging in, you’ll see a chat interface similar to other AI assistants. From a dropdown menu, select the DeepSeek model you installed earlier and start interacting with it through this more user-friendly interface.

Configuring the System for Automatic Startup

To maximize convenience, you can configure both Ollama and Open WebUI to start automatically when your system boots, ensuring DeepSeek is always ready to use without manual intervention.

Creating SystemD Service for Ollama

Ollama installs its own systemd service by default, but it’s good to verify that it’s properly enabled. Check the status of the Ollama service:

sudo systemctl status ollama

If the service isn’t active or enabled, you can set it up with these commands:

sudo systemctl start ollama
sudo systemctl enable ollama

This ensures that Ollama starts automatically whenever your system boots, making the DeepSeek model available without manual intervention.

Setting Up Open WebUI as a Service

Unlike Ollama, Open WebUI doesn’t automatically create a systemd service during installation. You’ll need to create one manually to enable automatic startup:

sudo nano /etc/systemd/system/open-webui.service

This opens a text editor where you can create a new service definition. Paste the following configuration, making sure to replace your_username with your actual username:

[Unit]
Description=Open WebUI Service
After=network.target ollama.service

[Service]
User=your_username
WorkingDirectory=/home/your_username
ExecStart=/home/your_username/open-webui-venv/bin/open-webui serve --host 0.0.0.0
Restart=always
Environment="PATH=/home/your_username/open-webui-venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

[Install]
WantedBy=multi-user.target

Save the file and exit the editor (in nano, press Ctrl+O, Enter, then Ctrl+X). Next, reload the systemd configuration to recognize the new service:

sudo systemctl daemon-reload

Now enable and start the Open WebUI service:

sudo systemctl enable open-webui.service
sudo systemctl start open-webui.service

Check the status to ensure it’s running correctly:

sudo systemctl status open-webui.service
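
If the status shows a failure, the service logs usually pinpoint the cause; follow them live with:

sudo journalctl -u open-webui.service -f

Common culprits are a wrong username or path in the service file, or a virtual environment that is missing the open-webui executable.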

With both services configured, DeepSeek and its web interface will automatically start whenever your system boots, providing a seamless experience similar to cloud-based AI services but with the privacy and control advantages of local deployment.

Accessing DeepSeek Over Local Network

One of the major advantages of running DeepSeek locally is the ability to access it from multiple devices on your local network, effectively creating your own private AI service for your home or office.

Network Configuration

Open WebUI is only reachable from other machines if it binds to your machine’s network IP address or to all available network interfaces (0.0.0.0) rather than only localhost (127.0.0.1).

If you followed the previous section on creating a systemd service, you’ve already configured Open WebUI to bind to all interfaces with the --host 0.0.0.0 parameter. If you’re starting Open WebUI manually, use:

open-webui serve --host 0.0.0.0

You’ll also need to ensure that your firewall allows incoming connections on the port Open WebUI uses (default is 8080). If you’re using UFW (Uncomplicated Firewall), you can add an exception with:

sudo ufw allow 8080/tcp

This opens port 8080 for TCP connections, allowing other devices on your network to connect to Open WebUI.
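
You can confirm the rule is in place with:

sudo ufw status

The output should include an ALLOW entry for 8080/tcp.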

Connecting from Other Devices

Once properly configured, you can access DeepSeek from any device on your local network by opening a web browser and navigating to the IP address of the machine running DeepSeek, followed by the port number:

http://192.168.1.xxx:8080

Replace 192.168.1.xxx with the actual IP address of your Ubuntu machine. You can find this address by running ip a or hostname -I in the terminal.
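
Before reaching for a browser, you can confirm the port is reachable from another Linux or macOS device with curl, keeping the same placeholder address:

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.xxx:8080

Any HTTP status code in the output means the network path and firewall are fine; a timeout points back to the firewall or binding configuration.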

When accessing from another device, you’ll be prompted to log in with the same credentials you created when first setting up Open WebUI. After logging in, you’ll have full access to DeepSeek’s capabilities from that device, just as if you were using it directly on the host machine.

For enhanced security, especially if you’re in a shared environment, consider implementing additional authentication measures or using a VPN if you need to access DeepSeek remotely from outside your local network.

Practical Usage Guide

Now that DeepSeek is installed and running, let’s explore how to effectively utilize its capabilities for various tasks.

Basic DeepSeek Commands

Whether you’re using the terminal interface or the Web UI, DeepSeek offers a wide range of capabilities for text generation, question answering, and more. Here are some practical examples to get you started:

For simple question answering, you can ask factual questions like:

Explain how photosynthesis works in plants.

For creative text generation, try prompts like:

Write a short story about a robot discovering emotions for the first time.

DeepSeek excels at document summarization. In the Web UI, you can upload documents and ask:

Summarize the main points of this document in three paragraphs.

For programming assistance, you can request code examples:

Write a Python function that sorts a list of dictionaries based on a specific key.

When using the command-line interface, you can pass a one-off prompt to DeepSeek directly as an argument:

ollama run deepseek-r1:7b "Explain the difference between machine learning and deep learning."

This approach is particularly useful for scripting or integrating DeepSeek into other workflows.
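
ollama run also reads a prompt from standard input, which makes it easy to drive from shell pipelines. A small sketch, where notes.txt is a hypothetical file you want summarized:

{ echo "Summarize the following text:"; cat notes.txt; } | ollama run deepseek-r1:7b

The braces group the two commands so their combined output becomes a single prompt: your instruction followed by the file’s contents.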

Fine-tuning Model Parameters

DeepSeek’s behavior can be adjusted by modifying various parameters that control how it generates text:

Temperature: This parameter controls the randomness of responses. Lower values (e.g., 0.1) make outputs more deterministic and focused, while higher values (e.g., 0.8) introduce more creativity and variability. Ollama does not expose these options as command-line flags; instead, you set them from inside an interactive session. Start the model with ollama run deepseek-r1:7b, then type:

/set parameter temperature 0.3

Context Window: This determines how much previous conversation context DeepSeek considers when generating responses. A larger context window allows for more coherent multi-turn conversations but consumes more memory. In Ollama this is the num_ctx parameter:

/set parameter num_ctx 4096

Top-p (Nucleus Sampling): This parameter controls diversity by considering only the most likely tokens whose cumulative probability exceeds the specified value. A value of 0.9 means DeepSeek will only consider tokens in the top 90% of probability mass:

/set parameter top_p 0.9

Top-k: This parameter limits token selection to the k most likely next tokens. A lower value (e.g., 40) makes responses more focused and deterministic:

/set parameter top_k 40

In the Web UI, these parameters can be adjusted through the settings interface, making it easier to experiment with different configurations without remembering command-line syntax.
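
If you want a particular combination of settings to persist across sessions, Ollama also lets you bake parameters into a named model variant using a Modelfile. A minimal sketch, where deepseek-focused is just an illustrative name:

FROM deepseek-r1:7b
PARAMETER temperature 0.3
PARAMETER num_ctx 4096

Save those three lines as Modelfile, then build and run the variant:

ollama create deepseek-focused -f Modelfile
ollama run deepseek-focused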

Troubleshooting Common Issues

Even with careful installation, you might encounter some issues when setting up or running DeepSeek. Here are solutions to common problems:

Installation Errors

If the piped Ollama installer fails partway through, download the script first and run it manually so you can see exactly where it stops:

curl -fsSL https://ollama.com/install.sh -o install.sh
sh install.sh

For Python dependency issues with Open WebUI, ensure your virtual environment is properly activated before installation:

source ~/open-webui-venv/bin/activate

If you see errors related to missing CUDA libraries but don’t have a GPU, install the CPU-only PyTorch build before reinstalling Open WebUI, as described in the installation section:

pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install open-webui

Network-related installation problems often stem from firewall or proxy settings. Temporarily disable your firewall to test if it’s causing the issue:

sudo ufw disable

Remember to re-enable it after testing:

sudo ufw enable

If Open WebUI fails to start with “ModuleNotFoundError” errors, reinstall it while ensuring all dependencies are properly resolved:

pip install --upgrade pip
pip install open-webui --force-reinstall

Performance Optimization

If DeepSeek runs slowly on your system, consider these optimization strategies:

For CPU-only systems, use smaller models like deepseek-r1:1.5b instead of larger variants. The performance difference is substantial, with smaller models responding in seconds rather than minutes on modest hardware.

Adjust your system’s swap space to provide additional virtual memory when physical RAM is exhausted:

sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

To make this swap permanent, add it to your fstab:

echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
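
Verify that the new swap space is active:

swapon --show
free -h

The swapfile should appear in the swapon output, and free -h will show the additional capacity on the Swap line.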

Close resource-intensive applications before running DeepSeek to free up memory and processing power. Browser tabs, video editing software, and games can consume significant resources that DeepSeek could use.

If using a GPU, ensure your drivers are up to date. For NVIDIA GPUs:

ubuntu-drivers devices
sudo ubuntu-drivers autoinstall

A system reboot is often required after driver updates.
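
After rebooting, confirm the driver is loaded and the GPU is visible:

nvidia-smi

This prints the driver version plus current GPU memory and utilization; if the command errors out, the driver did not install correctly.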

Advanced Configuration Options

As you become more familiar with DeepSeek, you might want to explore advanced configurations that enhance its functionality.

Running Multiple Models

Ollama supports running multiple models simultaneously, allowing you to compare their outputs or select different models for specific tasks. You can download multiple DeepSeek variants or even mix DeepSeek with other models available in the Ollama library:

ollama pull deepseek-r1:1.5b
ollama pull deepseek-r1:7b

To list all installed models:

ollama list

You can switch between models easily in the Web UI by selecting from the model dropdown menu. For command-line usage, simply specify the model name:

ollama run deepseek-r1:1.5b

Then switch to another model:

ollama run deepseek-r1:7b

This flexibility allows you to use smaller models for quick tasks and larger models for more complex queries, optimizing both performance and quality based on your specific needs.
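
Since each variant takes up disk space, you can remove a model you no longer need with:

ollama rm deepseek-r1:1.5b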

Congratulations! You have successfully installed DeepSeek. Thanks for using this tutorial to install the DeepSeek AI model on your Ubuntu 24.04 LTS system. For additional help or useful information, we recommend you check the official DeepSeek website.

r00t

r00t is an experienced Linux enthusiast and technical writer with a passion for open-source software. With years of hands-on experience in various Linux distributions, r00t has developed a deep understanding of the Linux ecosystem and its powerful tools. He holds certifications in SCE and has contributed to several open-source projects. r00t is dedicated to sharing his knowledge and expertise through well-researched and informative articles, helping others navigate the world of Linux with confidence.