Today I’ll show you how to install DeepSeek-R1 AI locally on a Windows, macOS, or Linux PC with a GUI.
STEP 1: Install Ollama
First, we need to download and install Ollama. Choose the download that matches your operating system.
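Once the installer finishes, you can confirm the `ollama` binary is actually on your PATH before moving on. A quick sanity-check sketch (the helper name is my own, not part of Ollama):

```python
import shutil
import subprocess

def is_installed(binary: str) -> bool:
    """Return True if `binary` can be found on the current PATH."""
    return shutil.which(binary) is not None

if is_installed("ollama"):
    # Prints the installed version string reported by the CLI
    result = subprocess.run(["ollama", "--version"],
                            capture_output=True, text=True)
    print(result.stdout.strip())
else:
    print("ollama not found -- check your PATH or reinstall")
```

If the version prints, you’re ready for the next step.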

STEP 2: Download Deepseek-R1
After installing Ollama, we need to download a DeepSeek-R1 model.
Open PowerShell on Windows, or a terminal on macOS and Linux, and type in one of the commands below.
Run only one of the following commands, depending on your computer’s hardware capacity.
ollama run deepseek-r1:1.5b
# Size: 1.5 GB
ollama run deepseek-r1:7b
# Size: 4.7 GB
ollama run deepseek-r1:8b
# Size: 4.9 GB
ollama run deepseek-r1:14b
# Size: 9.0 GB
ollama run deepseek-r1:32b
# Size: 20 GB
ollama run deepseek-r1:70b
# Size: 43 GB
ollama run deepseek-r1:671b
# Size: 404 GB
DeepSeek-R1 is now installed and can be used from the terminal. Next, we’ll set up the Open WebUI GUI for a better experience.
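Besides the interactive terminal, Ollama also serves a local REST API on port 11434, so you can talk to the model from a script. A minimal sketch, assuming you pulled the 7b model and the Ollama server is running (the `ask` helper is my own):

```python
import json
import urllib.request

# Default local endpoint exposed by the Ollama server
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks for one complete JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server running locally):
# print(ask("deepseek-r1:7b", "Explain recursion in one sentence."))
```

This is handy for scripting, but Open WebUI below gives you a much nicer chat interface.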
STEP 3: Setup Docker
Download and install Docker Desktop.
Register or sign in to Docker.
Now run one of the following commands, depending on your setup. If Ollama is already installed on your machine, use this command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
To run Open WebUI with NVIDIA GPU support, use this command instead:
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
Installing Open WebUI with Bundled Ollama Support
GPU Support: Utilize GPU resources by running the following command:
docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
CPU Only: If you’re not using a GPU, use this command instead:
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
You can now access DeepSeek-R1 through Open WebUI at http://localhost:3000
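Before opening your browser, you can check that the container is actually listening on port 3000. A small illustration using a plain TCP probe (this helper is not part of Open WebUI):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (requires the open-webui container to be running):
# print(is_port_open("localhost", 3000))
```

If the probe fails, check the container logs with `docker logs open-webui`.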