Chinese AI lab DeepSeek has launched two preview versions of its newest large language model, DeepSeek V4, a long-awaited successor to last year's V3.2 model and the accompanying R1 reasoning model that took the AI world by storm.
The company says both DeepSeek V4 Flash and V4 Pro are mixture-of-experts models with context windows of 1 million tokens each, enough to fit large codebases or entire documents in a single prompt. The mixture-of-experts approach activates only a subset of a model's parameters for each input token, which lowers inference costs relative to running the full model.
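To make the mixture-of-experts idea concrete, here is a minimal toy sketch of top-k expert routing in Python with NumPy. This is purely illustrative and not DeepSeek's actual architecture: the expert count, top-k value, and dimensions are all invented for the example. The key point it demonstrates is that a router selects a few experts per token, so most of the model's parameters sit idle on any given input.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total experts available (illustrative value)
TOP_K = 2         # experts actually activated per token (illustrative value)
DIM = 4           # toy hidden dimension

# Each "expert" is just a small weight matrix in this sketch.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_forward(x):
    """Route one token vector x through only its top-k experts."""
    scores = x @ router                    # one routing score per expert
    top = np.argsort(scores)[-TOP_K:]      # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Weighted sum of the selected experts' outputs; the other
    # NUM_EXPERTS - TOP_K experts are never evaluated for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
print(out.shape)
print(f"active experts per token: {TOP_K} of {NUM_EXPERTS}")
```

In a real model the experts are full feed-forward networks and routing happens at every MoE layer, but the cost structure is the same: compute per token scales with the number of activated experts, not the total parameter count.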