DeepSeek has introduced its latest AI models, V4 Pro and V4 Flash, over a year after the company gained viral attention by topping Apple's App Store free apps chart in the US.

"Welcome to the era of cost-effective 1 million context length," DeepSeek announced.

Context length refers to the maximum number of tokens an AI model can process at once — the amount of conversation and document text it can take into account when generating a response — which directly affects coherence and consistency in extended conversations. For comparison, OpenAI’s recently announced GPT-5.5 offers a context window ranging from 400,000 to 1 million tokens.
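To illustrate what a 1-million-token window means in practice, here is a minimal sketch of checking whether a conversation still fits in the context. It assumes a naive one-token-per-word tokenizer for illustration only; real models use subword tokenizers, so actual counts differ:

```python
# Naive illustration of a context-window check.
# ASSUMPTION: whitespace splitting stands in for a real subword tokenizer.

def count_tokens(text: str) -> int:
    """Approximate token count via whitespace splitting (illustrative only)."""
    return len(text.split())

def fits_in_context(history: list[str], limit: int = 1_000_000) -> bool:
    """True if the whole conversation history fits within the token limit."""
    return sum(count_tokens(turn) for turn in history) <= limit

short_chat = ["Hello, how are you?", "I'm fine, thanks!"]
print(fits_in_context(short_chat))               # True: only a handful of tokens

huge_input = ["word " * 1_200_000]               # ~1.2M "tokens" in one turn
print(fits_in_context(huge_input))               # False: exceeds the 1M window
```

Once the running total exceeds the limit, earlier turns must be truncated or summarized, which is why longer windows improve consistency in extended conversations.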

Key Features of DeepSeek V4 Pro and Flash

  • DeepSeek V4 Pro: Features 1.6 trillion total parameters and 49 billion active parameters. The model offers enhanced agentic capabilities, and DeepSeek claims its reasoning rivals top closed-source models, trailing only Gemini-3.1-Pro in world knowledge.
  • DeepSeek V4 Flash: Offers 284 billion total parameters and 13 billion active parameters. While less powerful than V4 Pro, it delivers faster response times, with reasoning abilities closely matching the Pro version and comparable performance on simple agent tasks.
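The gap between total and active parameters suggests a sparse Mixture-of-Experts design, where only a fraction of the network runs per token (an assumption — the announcement does not name the architecture). Using the figures quoted above, a quick sketch of the active share:

```python
# Active-parameter share per token, from the figures in DeepSeek's announcement.
# ASSUMPTION: the total/active split reflects a sparse (MoE-style) design.
specs = {
    "DeepSeek-V4-Pro":   {"total": 1.6e12, "active": 49e9},
    "DeepSeek-V4-Flash": {"total": 284e9,  "active": 13e9},
}

for name, p in specs.items():
    share = p["active"] / p["total"] * 100
    print(f"{name}: {share:.1f}% of parameters active per token")
# DeepSeek-V4-Pro: 3.1% of parameters active per token
# DeepSeek-V4-Flash: 4.6% of parameters active per token
```

In both cases only a few percent of the weights are active per token, which is how such large models can be served at relatively low cost.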

Both models remain open-source, allowing users to download and modify the weights as needed.

Controversy and Regulatory Scrutiny

DeepSeek’s rapid rise was followed by regulatory challenges. Shortly after the app topped the App Store charts, US federal agencies banned its use on government-owned devices, citing national security concerns; the company’s debut had also triggered a sharp selloff in US AI stocks. Additionally, South Korea temporarily paused downloads of the app over privacy concerns.

The announcement was shared via X (formerly Twitter) on April 24, 2026:

🚀 DeepSeek-V4 Preview is officially live open-sourced! Welcome to the era of cost-effective 1M context length.
🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models.
🔹 DeepSeek-V4-Flash: 284B total / 13B active params.… https://t.co/n1AgwMIymu
— DeepSeek (@deepseek_ai) April 24, 2026
Source: Engadget