
DeepSeek has made its way onto the Raspberry Pi after the hype over its lower hardware requirement

The performance is quite slow, but it is still functional

For those who haven’t caught up with the Blue AI Whale that’s been making waves everywhere, DeepSeek is an LLM from China that has not only caused major turbulence in the global AI sector but also sparked intense competition domestically. What has made DeepSeek such a prominent figure in the public eye is that it’s one of the largest open-source large language models released to date, was reportedly far cheaper to train than comparable models, and, as Michael Pound, a researcher at the University of Nottingham, put it, has “changed the game a little bit” by showing “you can train with more limited hardware.”

One recent example of its low computational requirements comes from AMD, which shared DeepSeek's R1 AI inference benchmarks for its RDNA 3-based flagship GPU, the RX 7900 XTX, demonstrating superior performance and encouraging users to run the model on their local AMD machines. Now, another example of DeepSeek running on less powerful hardware has surfaced, as the Raspberry Pi is managing to squeeze the popular AI model onto its boards.


DeepSeek on Raspberry Pi – functional, but not quite powerful

News of Raspberry Pi’s DeepSeek experiment comes from Jeff Geerling, a popular tech YouTuber and mad scientist, who has managed to run DeepSeek on a Raspberry Pi 5. But it's not quite as capable as you might expect. First off, the version of DeepSeek Jeff deployed on the Raspberry Pi 5 is not the full-size model that recently topped performance charts. Instead, he had to settle for the much smaller deepseek-r1:14b distilled variant to get things running. Even then, the Pi managed to output roughly one token every 1.2 seconds, meaning it would take about that long to “type” a single word of a response.
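The article doesn't spell out Jeff's exact toolchain, but a common way to reproduce this kind of experiment is with Ollama, which hosts the distilled model under the `deepseek-r1:14b` tag. A minimal sketch, assuming a 64-bit Raspberry Pi OS install and a Pi 5 with enough RAM to hold the roughly 9GB model:

```shell
# Hypothetical reproduction sketch -- the article doesn't confirm this exact setup.
# Install the Ollama runtime via its official install script.
curl -fsSL https://ollama.com/install.sh | sh

# Pull the 14B distilled model (a multi-gigabyte download).
ollama pull deepseek-r1:14b

# Ask a question and watch the tokens trickle out.
ollama run deepseek-r1:14b "Explain why the sky is blue in one paragraph."

# The --verbose flag prints timing stats, including the eval rate in tokens/s.
ollama run deepseek-r1:14b --verbose "Summarize the Raspberry Pi 5 specs."
```

Note that this is a network- and hardware-dependent setup, so the commands above are a sketch rather than something to run blindly on any board.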

Jeff goes into a lot more detail in his video, which we highly recommend checking out, but one interesting point he made is that you could speed things up significantly with an external graphics card. While DeepSeek can run on less powerful hardware, that doesn't mean a Raspberry Pi alone can keep pace with other LLM setups – you still need extra hardware with fast VRAM. For example, Jeff paired his Raspberry Pi with an AMD Radeon Pro W7700 graphics card, which comes with 16GB of VRAM, and saw more than a tenfold improvement, achieving around 20 to 50 tokens per second.
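To put those rates in perspective, here is a back-of-envelope estimate using the figures quoted above; the 200-token reply length is an illustrative assumption, not a number from the video:

```python
# Rough latency estimates for the token rates quoted in the article.
# The 200-token reply length is an illustrative assumption.
REPLY_TOKENS = 200

def reply_seconds(tokens_per_second: float, tokens: int = REPLY_TOKENS) -> float:
    """Estimated wall-clock time to generate a reply at a given throughput."""
    return tokens / tokens_per_second

pi_alone = reply_seconds(1 / 1.2)   # ~one token every 1.2 s on the bare Pi 5
pi_with_gpu = reply_seconds(20)     # low end of the 20-50 tokens/s eGPU figure

print(f"Pi 5 alone:   {pi_alone:.0f} s (~{pi_alone / 60:.0f} min)")
print(f"Pi 5 + W7700: {pi_with_gpu:.0f} s")
```

In other words, a modest 200-token answer takes about four minutes on the bare Pi, but only around ten seconds once the external GPU is doing the heavy lifting.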

Now, running GPUs on a Raspberry Pi is a whole other story, but in essence, running DeepSeek on a Pi is possible – the real question is whether it's practical without an external boost. That said, this is likely just the beginning of the Raspberry Pi community's efforts to squeeze better DeepSeek performance out of its boards.

About the Author

Hassam boasts over seven years of professional experience as a dedicated PC hardware reviewer and writer.