News
Anthropic’s developers recently upgraded the AI model Claude Sonnet 4 to support up to 1 million tokens of context, thereby ...
Anthropic upgrades Claude Sonnet 4 to a 1M token context window and adds memory, enabling full codebase analysis, long ...
Claude Sonnet 4 can now support up to one million tokens of context, marking a fivefold increase from the prior 200,000, ...
The model’s usage share on AI marketplace OpenRouter hit 20 per cent as of mid-August, behind only Anthropic’s coding model.
Coder, has become the world's second most used AI coding tool within a month of its July 23 launch. It holds a significant 20 ...
Anthropic has expanded the capabilities of its Claude Sonnet 4 AI model to handle up to one million tokens of context, five ...
With this larger context window, Claude Sonnet 4 can process codebases with 75,000+ lines of code in a single request.
Dan Shipper, in Vibe Check: Today, Anthropic is releasing a version of Claude Sonnet 4 that has a 1-million token context window. That ...
The company today revealed that Claude Sonnet 4 now supports up to 1 million tokens of context in the Anthropic API — a five-fold increase over the previous limit.
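As a rough illustration of what the snippets above describe (not drawn from any of the cited articles), a request against the Anthropic API via the Python SDK might look like the sketch below. The model ID claude-sonnet-4-20250514 and the beta flag context-1m-2025-08-07 are assumptions about current naming and should be checked against Anthropic's documentation; repo_dump.txt is a hypothetical file holding a concatenated codebase.

```python
# Minimal sketch: sending a very large prompt (e.g. a whole codebase) to
# Claude Sonnet 4 through the Anthropic Python SDK. The model ID and beta
# flag below are assumptions; verify them against the current API docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical input: a repository concatenated into a single text file,
# which a 1M-token window can hold in one request.
with open("repo_dump.txt", encoding="utf-8") as f:
    codebase = f.read()

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",    # assumed model identifier
    betas=["context-1m-2025-08-07"],     # assumed flag enabling the 1M context window
    max_tokens=4096,
    messages=[
        {
            "role": "user",
            "content": (
                "Here is my full codebase:\n\n"
                + codebase
                + "\n\nSummarize its architecture and point out unused modules."
            ),
        }
    ],
)

print(response.content[0].text)
```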
Anthropic's new Claude 4 models promise the biggest AI brains ever
Claude Sonnet 4 is the smaller model, but it's still a major upgrade in power from the earlier Sonnet 3.7. Anthropic claims Sonnet 4 is much better at following instructions and coding.
Claude Sonnet 4 has impressed companies like GitHub, Manus, iGent, Sourcegraph, and Augment Code with its capabilities. It excels in following complex instructions, autonomous app ...