DeepSeek’s distilled new R1 AI model can run on a single GPU | TechCrunch


DeepSeek’s updated R1 reasoning AI model may be getting the bulk of the AI community’s attention this week. But the Chinese AI lab also released a smaller, “distilled” version of its new R1, DeepSeek-R1-0528-Qwen3-8B, that DeepSeek claims beats comparably sized models on certain benchmarks.

The smaller updated R1, which was built on the Qwen3-8B model Alibaba released in May, performs better than Google’s Gemini 2.5 Flash on AIME 2025, a collection of challenging math questions.

DeepSeek-R1-0528-Qwen3-8B also nearly matches Microsoft’s recently released Phi 4 reasoning plus model on another math skills test, HMMT.

So-called distilled models like DeepSeek-R1-0528-Qwen3-8B are generally less capable than their full-sized counterparts. On the plus side, they’re far less computationally demanding. According to the cloud platform NodeShift, Qwen3-8B requires a GPU with 40GB–80GB of RAM to run (e.g., an Nvidia H100). The full-sized new R1 needs around a dozen 80GB GPUs.

DeepSeek trained DeepSeek-R1-0528-Qwen3-8B by taking text generated by the updated R1 and using it to fine-tune Qwen3-8B. On a dedicated webpage for the model on the AI dev platform Hugging Face, DeepSeek describes DeepSeek-R1-0528-Qwen3-8B as intended for “both academic research on reasoning models and industrial development focused on small-scale models.”
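The recipe DeepSeek describes — sampling text from the larger R1 and fine-tuning the smaller Qwen3-8B on it — is a standard supervised distillation setup. A minimal sketch of that idea using the Hugging Face transformers library is below; the model id, example data, and hyperparameters are illustrative assumptions, not DeepSeek’s actual training configuration.

```python
# Sketch of supervised distillation: fine-tune a small "student" model on
# text produced by a larger "teacher" reasoning model. Illustrative only;
# the model id, data, and learning rate are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

STUDENT_ID = "Qwen/Qwen3-8B"  # assumed Hugging Face id for the base model

# In practice these would be reasoning traces generated by the updated R1.
teacher_texts = [
    "Question: What is 12 * 13?\n"
    "Reasoning: 12 * 13 = 12 * 10 + 12 * 3 = 120 + 36 = 156.\n"
    "Answer: 156",
]

tokenizer = AutoTokenizer.from_pretrained(STUDENT_ID)
model = AutoModelForCausalLM.from_pretrained(STUDENT_ID, torch_dtype=torch.bfloat16)
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for text in teacher_texts:
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048)
    # Standard causal-LM objective: the student learns to reproduce the
    # teacher's output, so the labels are the input ids themselves.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```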

DeepSeek-R1-0528-Qwen3-8B is available under a permissive MIT license, meaning it can be used commercially without restriction. Several hosts, including LM Studio, already offer the model through an API.
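Developers who would rather run it locally than go through a hosted API can pull the weights straight from Hugging Face. A rough sketch follows, assuming the model id `deepseek-ai/DeepSeek-R1-0528-Qwen3-8B` from the Hugging Face listing and a GPU in the 40GB–80GB range noted above; generation settings are illustrative.

```python
# Hedged example: load the distilled model from Hugging Face and generate a reply.
# The model id and generation settings are assumptions; verify against the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"  # assumed Hugging Face id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # keeps memory within the GPU range cited by NodeShift
    device_map="auto",
)

messages = [{"role": "user", "content": "How many prime numbers are there below 30?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models tend to emit a long chain of thought before the final answer,
# so leave generous room for new tokens.
output = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```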
