News

Anthropic has introduced a new feature in its Claude Opus 4 and 4.1 models that allows the AI to choose to end certain ...
Anthropic has said that its Claude Opus 4 and 4.1 models will now have the ability to end conversations that are “extreme ...
Can exposing AI to “evil” make it safer? Anthropic’s preventative steering with persona vectors explores controlled risks to ...
Anthropic rolled out a feature letting its AI assistant terminate chats with abusive users, citing "AI welfare" concerns and ...
Claude AI can now withdraw from conversations to defend itself, signalling a shift in which safeguarding the model becomes ...
By empowering Claude to exit abusive conversations, Anthropic is contributing to ongoing debates about AI safety, ethics, and ...
Anthropic has announced a new experimental safety feature that allows its Claude Opus 4 and 4.1 artificial intelligence ...
In the weeks since, revelations have confirmed what the initial stories only implied and helped clarify the reasons: While ...
Anthropic’s Claude is getting a side gig as a tutor. The company has launched new modes for its two consumer-facing platforms ...
The model’s usage share on AI marketplace OpenRouter hit 20 per cent as of mid-August, behind only Anthropic’s coding model.
Anthropic has given Claude, its AI chatbot, the ability to end potentially harmful or dangerous conversations with users.
Apple is looking to improve Swift Assist through native Claude integration, as references to Anthropic's AI models were ...