
China's DeepSeek has unveiled its DeepSeek-V4-Pro models

Anubhav Sharma · 27 Apr 2026

DeepSeek officially launched the preview of its DeepSeek-V4 model series on April 24, 2026. This flagship release introduces a two-tier lineup—Pro and Flash—both of which are open-sourced under the permissive MIT license.

Key Model Specifications

The V4 series uses a Mixture-of-Experts (MoE) architecture designed for high intelligence and extreme efficiency.

  • DeepSeek-V4-Pro: The flagship model with 1.6 trillion total parameters (49 billion active). It is positioned to compete with top closed-source models like OpenAI's GPT-5.5.
  • DeepSeek-V4-Flash: A lighter, faster version with 284 billion total parameters (13 billion active).
  • 1M Context Window: Both models support an ultra-long context length of 1 million tokens, putting them on par with major Western rivals. 
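The efficiency of the MoE design comes from routing each token through only a small fraction of the network. A quick sketch of the active-parameter ratios implied by the figures above (parameter counts taken from this article):

```python
# Active-parameter ratios implied by the published MoE figures.
# Total/active counts are in billions of parameters, as stated above.
models = {
    "DeepSeek-V4-Pro": {"total_b": 1600, "active_b": 49},
    "DeepSeek-V4-Flash": {"total_b": 284, "active_b": 13},
}

for name, p in models.items():
    ratio = p["active_b"] / p["total_b"]
    print(f"{name}: {ratio:.1%} of parameters active per token")
```

In other words, each token touches roughly 3-5% of the total weights, which is how a 1.6-trillion-parameter model can run with the per-token compute of a much smaller dense model.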

Performance and Pricing

DeepSeek continues its strategy of "rock-bottom prices," claiming performance near state-of-the-art levels at a fraction of the cost. 

  • Benchmarking: V4-Pro leads open-source models in "world knowledge" and agentic tasks, though it retains a high hallucination rate of approximately 94% when it does not know the answer.
  • Cost Efficiency: The Pro version is reportedly 98% cheaper than competing "Pro" models from U.S. labs. Running benchmarks on V4-Pro costs roughly $1,071, compared to over $4,800 for Anthropic's Claude 4.7 Opus. 

Strategic and Hardware Shifts

A major highlight of this launch is its deep integration with domestic Chinese hardware.

  • Huawei Optimization: DeepSeek spent months optimizing the V4 stack for Huawei Ascend chips, marking a significant shift toward self-reliance amid ongoing U.S. chip export restrictions.
  • Architectural Upgrades: Includes a new Hybrid Attention Architecture and the Muon Optimizer for faster convergence and improved stability during training.

Where to Access

  • Web & App: Available via "Expert Mode" (Pro) and "Instant Mode" (Flash) at chat.deepseek.com.
  • API: The updated API is live and OpenAI-compatible.
  • Local Deployment: Model weights are available for download on Hugging Face and supported by tools like Ollama.
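Because the API is OpenAI-compatible, a chat request follows the standard `/chat/completions` payload shape. A minimal sketch is below; the base URL and the `deepseek-v4-pro` model id are assumptions for illustration, so check the official API docs for the exact values before use.

```python
import json

# Assumed endpoint and hypothetical model id; the article only states
# that the updated API is live and OpenAI-compatible.
BASE_URL = "https://api.deepseek.com"  # assumption
MODEL = "deepseek-v4-pro"              # hypothetical id

# Standard OpenAI-style chat-completions request body.
payload = {
    "model": MODEL,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize Mixture-of-Experts in one sentence."},
    ],
    "stream": False,
}

# Any OpenAI-compatible client can send this as-is, e.g. an HTTP POST to
# {BASE_URL}/chat/completions with an "Authorization: Bearer <key>" header.
print(json.dumps(payload, indent=2))
```

The same payload works unchanged against a local deployment (for example, an Ollama server exposing an OpenAI-compatible endpoint), which is the practical benefit of the compatibility claim.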



Anubhav Sharma

Student
