Chinese AI startup DeepSeek has launched its V4 model specifically adapted to run on Huawei chips, rolling out both Pro and lighter “Flash” variants. The move underscores Beijing’s push to build an independent AI stack, reducing reliance on foreign compute. By aligning advanced models with domestic hardware, DeepSeek is highlighting progress in China’s broader AI infrastructure.
DeepSeek has previewed a new AI model, arguing it “closes the gap” with today’s leading frontier systems. The company says efficiency and performance improve over DeepSeek V3.2 thanks to architectural changes, and that the model nearly matches current top reasoning results across both open and closed models. The preview sets up what could be a serious shift in the race toward stronger reasoning.
DeepSeek V4 has arrived in two versions: a powerful Pro model with 1.6 trillion parameters and an efficient Flash variant. The headline feature is a one-million-token context window, enabling far longer and more complex prompts. With aggressive performance claims and pricing, the question is whether this pace can be sustained against fast-moving competition.
Developers reported “AI shrinkflation” as Claude appeared less capable, more repetitive, and less efficient with tokens. Anthropic’s technical post-mortem says the model weights didn’t regress; instead, three product-layer changes around the model caused the degradation: a reasoning-effort default, a caching bug that wiped thinking too often, and tighter verbosity limits. The company says it has reverted the changes and reset subscriber usage limits.
OpenAI has launched GPT-5.5, its latest flagship model built to handle complex, multi-part tasks as an active collaborator. The company says it improves performance in coding, knowledge work, and scientific research, boosting autonomy and efficiency while keeping latency unchanged. OpenAI is also rolling out new API pricing, starting at $5 per 1 million tokens.
Chinese AI firm DeepSeek has unveiled a new model positioned to dramatically cut costs while supporting an unusually large one-million-token context window. The release is expected to improve real-world usability and open doors for broader commercial deployments. DeepSeek also rolled out two variants, V4-Pro and V4-Flash, with different parameter and performance profiles.