

DeepSeek today released an improved version of its DeepSeek-V3 large language model under a new open-source license.
Software developer and blogger Simon Willison was first to report the update. DeepSeek itself didn't issue an announcement, and the new model's Readme file, the part of a code repository that usually holds explanatory notes, is currently empty.
DeepSeek-V3 is an open-source LLM that made its debut in December. It forms the basis of DeepSeek-R1, the reasoning model that propelled the Chinese artificial intelligence lab to prominence earlier this year. DeepSeek-V3 is a general-purpose model that isn’t specifically optimized for reasoning, but it can solve some math problems and generate code.
Until now, the LLM had been distributed under a custom open-source license. The release that DeepSeek rolled out today switches to the widely used MIT License, which lets developers use the updated model in commercial projects and modify it with practically no limitations.
More notably, it appears that the new DeepSeek-V3 release is more capable and hardware-efficient than the original.
Most cutting-edge LLMs can only run on data center graphics cards. Awni Hannun, a research scientist at Apple Inc.’s machine learning research group, ran the new DeepSeek-V3 release on a Mac Studio. The model managed to generate output at a rate of about 20 tokens per second.
The Mac Studio in question featured a high-end configuration with a $9,499 price tag. Deploying DeepSeek-V3 on the machine required four-bit quantization, an optimization technique that compresses a model's weights into four-bit values, trading some output accuracy for lower memory usage and latency.
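To illustrate the general idea, here is a minimal NumPy sketch of block-wise four-bit quantization. It is a toy, not the scheme Apple's tooling or DeepSeek actually uses: the block size, the symmetric [-8, 7] integer range and the per-block scales are all illustrative choices.

```python
import numpy as np

def quantize_4bit(weights: np.ndarray, block_size: int = 32):
    """Block-wise 4-bit quantization: map each block of floats to
    integers in [-8, 7] plus one float scale per block."""
    flat = weights.reshape(-1, block_size)
    scales = np.abs(flat).max(axis=1, keepdims=True) / 7.0  # one scale per block
    q = np.clip(np.round(flat / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_4bit(q: np.ndarray, scales: np.ndarray, shape) -> np.ndarray:
    """Reverse the mapping; the rounding error is the accuracy cost."""
    return (q.astype(np.float32) * scales).reshape(shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)

q, scales = quantize_4bit(w)
w_hat = dequantize_4bit(q, scales, w.shape)
print(f"mean absolute reconstruction error: {np.abs(w - w_hat).mean():.4f}")
```

Storing four bits per weight instead of 32 (or 16, for models shipped in half precision) cuts the memory footprint by roughly eight times (or four), which is what makes a model of this size fit on a single workstation.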
According to an X post spotted by VentureBeat, the new DeepSeek-V3 version is better at programming than the original release. The post contains what is described as a benchmark test that evaluated the model’s ability to generate Python and Bash code. The new release achieved a score of about 60%, which is several percentage points better than the original DeepSeek-V3.
The model still trails DeepSeek-R1, the AI lab's flagship reasoning-optimized LLM. The latest DeepSeek-V3 release also scored lower than Qwen-32B, another reasoning-optimized model.
Although DeepSeek-V3 features 671 billion parameters, it activates only about 37 billion of them when answering prompts. This mixture-of-experts arrangement enables the model to make do with less infrastructure than traditional LLMs that activate all their parameters. According to DeepSeek, the LLM is also more efficient than DeepSeek-R1, which lowers inference costs.
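A short sketch helps show why so few parameters fire per token. The snippet below is a generic top-k mixture-of-experts layer in NumPy; the expert count, router design and gating math are illustrative stand-ins, not DeepSeek-V3's actual architecture, which uses far more experts and its own load-balancing scheme.

```python
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    """Send each token through only its top-k experts."""
    logits = x @ router_w                        # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]   # k best experts per token
    sel = np.take_along_axis(logits, topk, axis=-1)
    gates = np.exp(sel) / np.exp(sel).sum(axis=-1, keepdims=True)
    out = np.zeros_like(x)
    for i in range(len(x)):                      # unselected experts never run
        for j, e in enumerate(topk[i]):
            out[i] += gates[i, j] * experts[e](x[i])
    return out

rng = np.random.default_rng(0)
n_experts, d = 8, 16
# Each "expert" is a small feed-forward block; per token, 6 of the 8 stay idle.
weights = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_experts)]
experts = [lambda t, W=W: np.tanh(t @ W) for W in weights]
router_w = rng.standard_normal((d, n_experts)) * 0.1

tokens = rng.standard_normal((4, d))
print(moe_forward(tokens, experts, router_w).shape)  # (4, 16)
```

With the total parameters spread across many experts but only k of them active per token, the compute per prompt scales with the roughly 37 billion activated parameters rather than the full 671 billion.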
The original version of DeepSeek-V3 was trained on a dataset of 14.8 trillion tokens. The training process used about 2.8 million graphics card hours, significantly less than frontier LLMs typically require. To improve the model's output quality, DeepSeek engineers fine-tuned it on responses generated by DeepSeek-R1.
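That fine-tuning step amounts to a form of distillation: the stronger reasoning model's answers become supervised training targets for DeepSeek-V3. The sketch below shows only the data-preparation side of the idea; the function names and the toy teacher are hypothetical, not DeepSeek's pipeline.

```python
def build_distillation_dataset(prompts, teacher_generate):
    """Pair each prompt with the teacher's response; the pairs are then
    used as ordinary supervised fine-tuning examples for the student."""
    return [{"prompt": p, "response": teacher_generate(p)} for p in prompts]

# Stand-in teacher; in DeepSeek's case this role is played by DeepSeek-R1.
def toy_teacher(prompt: str) -> str:
    return f"(step-by-step reasoning and answer for: {prompt})"

dataset = build_distillation_dataset(
    ["What is 12 * 7?", "Write a Bash one-liner that counts files."],
    toy_teacher,
)
for example in dataset:
    print(example["prompt"], "->", example["response"])
```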