How to Run DeepSeek R1 Locally (Using Ollama + ChatboxAI)


This content originally appeared on DEV Community and was authored by kshitij Bhatnagar

If you want to run DeepSeek R1 locally on your system, there's no need to worry. This guide is written in a simple and easy-to-follow manner, explaining step-by-step how to use Ollama and ChatboxAI to get it running.

🖥️ System Requirements (Based on GPU/RAM)

Each model has different hardware requirements, so first, check which model your system can support:

| Model | GPU Required | VRAM (GPU Memory) | RAM (System Memory) | Storage (Disk) |
| --- | --- | --- | --- | --- |
| DeepSeek R1 1.5B | No GPU / integrated GPU | 4GB+ | 8GB+ | 10GB+ |
| DeepSeek R1 7B | GTX 1650 / RTX 3050 | 6GB+ | 16GB+ | 30GB+ |
| DeepSeek R1 14B | RTX 3060 / RTX 4060 | 12GB+ | 32GB+ | 60GB+ |
| DeepSeek R1 32B | RTX 4090 / A100 | 24GB+ | 64GB+ | 100GB+ |
  • 👉 With a GTX 1650 or weaker (including integrated GPUs), stick to 1.5B; 7B is the absolute ceiling and may be unstable.
  • 👉 7B needs at least 16GB of system RAM.
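To avoid guessing, you can check your RAM and free disk from Python and map them to the smallest safe tier from the table above. This is a rough sketch (Linux/macOS only; the `deepseek-r1:*` tags follow Ollama's library naming for the R1 distills):

```python
import os
import shutil

def system_report():
    """Return (total RAM in GB, free disk in GB) on Linux/macOS."""
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3
    free_gb = shutil.disk_usage("/").free / 1024**3
    return ram_gb, free_gb

def suggest_model(ram_gb):
    """Pick the largest DeepSeek R1 size the RAM column above allows."""
    if ram_gb >= 64:
        return "deepseek-r1:32b"
    if ram_gb >= 32:
        return "deepseek-r1:14b"
    if ram_gb >= 16:
        return "deepseek-r1:7b"
    return "deepseek-r1:1.5b"

ram_gb, free_gb = system_report()
print(f"RAM: {ram_gb:.1f} GB, free disk: {free_gb:.1f} GB")
print("Suggested model:", suggest_model(ram_gb))
```

Treat the suggestion as an upper bound: a dedicated GPU with enough VRAM (per the table) still matters for speed.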

⚙️ Step-by-Step Installation Guide

1️⃣ Install Ollama (Local LLM Runtime)

Ollama is a lightweight tool that helps run LLMs (Large Language Models) locally. Install it first:

🔗 Ollama download: https://ollama.com/download

👉 For Windows Users:

  • Download the installer and install it (just click Next → Next).
  • Open CMD and verify the install:
  ollama --version

If this prints a version number, the installation is complete. (Note: `ollama run <model>` also works as a check, but it will download that model first.)

👉 For Mac Users:

  • Download the macOS app from the same page and drag it into Applications.

👉 For Linux Users:

  • Open a terminal and run:
curl -fsSL https://ollama.com/install.sh | sh

2️⃣ Download the DeepSeek R1 Model

Use the following command to pull the model (the R1 distills live under the `deepseek-r1` name in Ollama's library):

ollama pull deepseek-r1:7b
  • 👉 If you want to run 1.5B instead of 7B, use:
ollama pull deepseek-r1:1.5b

⚠ This download may take some time depending on your internet speed. Once downloaded, you can run it using Ollama.

3️⃣ Install ChatboxAI (Optional GUI for Better Experience)

If you want a Graphical User Interface (GUI), ChatboxAI is a convenient desktop tool for interacting with local AI models.

🔗 Chatbox download: https://chatboxai.app

Installation Steps (Chatbox desktop app):

  • Download the installer for your OS and run it.
  • In Settings, select Ollama as the model provider; keep the default API host http://localhost:11434.
  • Choose your pulled model (e.g. deepseek-r1:7b) and start chatting.

Alternative: if you prefer a browser-based UI instead, the steps below set up text-generation-webui (a separate project, not Chatbox):

  • Ensure Python 3.10+ is installed.
  • Open Command Prompt (CMD) and run:
git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui
pip install -r requirements.txt
  • Start the server:
python server.py
  • Open your browser and go to localhost:7860, then select your model.
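GUIs like Chatbox talk to Ollama over its local HTTP API, and you can call it directly too. A minimal non-streaming sketch (assumes the Ollama server is running on its default port 11434 and the model has been pulled):

```python
import json
import urllib.request

def build_payload(prompt, model="deepseek-r1:7b"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt, model="deepseek-r1:7b", host="http://localhost:11434"):
    """POST a single prompt to a local Ollama server and return the reply text."""
    data = json.dumps(build_payload(prompt, model)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (only works with the server running and the model pulled):
# print(ask_ollama("Explain recursion in one sentence."))
```

This is the same endpoint the GUIs use, so it is a quick way to confirm the server is reachable before blaming the GUI.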

🚀 Running DeepSeek R1 (Final Step)

Once everything is installed, it’s time to run the model:

👉 Open CMD and run:

ollama run deepseek-r1:7b

👉 If 7B is not running, try 1.5B:

ollama run deepseek-r1:1.5b

👉 If you are using ChatboxAI, just open the browser and interact with the model through the GUI.

Now you can use DeepSeek R1 for coding, AI chat, and optimizing your workflow! 😎🔥

🛠️ Common Problems & Solutions

❌ 1️⃣ Model crashes due to low VRAM?
✔ Try 1.5B instead of 7B.
✔ Increase Windows Pagefile (Virtual Memory settings).

❌ 2️⃣ Model response is too slow?
✔ Use SSD instead of HDD.
✔ Close background applications.
✔ Optimize RAM usage.
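To measure speed rather than guess, note that Ollama's /api/generate response includes `eval_count` (tokens generated) and `eval_duration` (generation time in nanoseconds); dividing the two gives tokens per second. A small helper:

```python
def tokens_per_second(resp):
    """Generation speed from an Ollama /api/generate response dict.

    eval_count    -- number of tokens generated
    eval_duration -- generation time in nanoseconds
    """
    return resp["eval_count"] / (resp["eval_duration"] / 1e9)

# Example with a hypothetical response: 120 tokens in 8 seconds
sample = {"eval_count": 120, "eval_duration": 8_000_000_000}
print(f"{tokens_per_second(sample):.1f} tokens/sec")  # → 15.0 tokens/sec
```

Anything in the low single digits on a 7B model usually means the model spilled out of VRAM into system RAM; drop to 1.5B and re-measure.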

❌ 3️⃣ 'Command not found' error in CMD?
✔ Check that Ollama installed correctly, and re-open CMD so PATH changes take effect.
✔ Ensure Python and dependencies are installed (needed only for the optional web UI).

🤩 Conclusion

If you followed this guide correctly, you can now run DeepSeek R1 locally without relying on third-party APIs. This is a privacy-friendly and cost-effective solution, perfect for developers and freelancers.

If you face any issues, drop a comment, and you’ll get help! 🚀🔥

