DeepSeek releases open-source 671-billion-parameter model focused on mathematical theorem proving

PANews | Apr 30, 2025 09:57
According to community reports and its Hugging Face page, DeepSeek today released a new model, DeepSeek-Prover-V2-671B, focused on mathematical theorem proving. The model is built on a Mixture-of-Experts (MoE) architecture and targets formal proofs written in the Lean 4 proof assistant. It has 671 billion parameters, and the combination of reinforcement learning with large-scale synthetic data significantly improves its automated proving capability. The model is available on Hugging Face and supports local deployment and commercial use.
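Since the model is distributed through Hugging Face and supports local deployment, one plausible way to try it is via the transformers library. The following is a minimal sketch, assuming the repository id deepseek-ai/DeepSeek-Prover-V2-671B and the standard causal-LM interface; the prompt format and generation settings are illustrative assumptions, and a 671B-parameter model would realistically require a large multi-GPU server.

```python
# Minimal sketch of loading the model for local inference.
# Assumptions: the Hugging Face repo id below and the standard
# transformers causal-LM interface; prompt format is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Prover-V2-671B"  # repo id as listed on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# A 671B model must be sharded across many GPUs; device_map="auto"
# lets accelerate place the shards automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Ask the model to complete a Lean 4 proof; `sorry` marks the hole to fill.
prompt = """Complete the following Lean 4 proof:

theorem add_comm_example (a b : Nat) : a + b = b + a := by
  sorry
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```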