News

Can exposing AI to “evil” make it safer? Anthropic’s preventative steering with persona vectors explores controlled risks to ...
AI is a relatively new tool, and despite its rapid deployment in nearly every aspect of our lives, researchers are still ...
Using two open-source models (Qwen 2.5 and Meta’s Llama 3), Anthropic engineers went deep into the neural networks to find the components that “light up” when the AI is acting evil, sycophantic, or just ...
In the paper, Anthropic explained that it can steer these vectors by instructing models to act in certain ways; for example, if it injects an evil prompt into the model, the model will respond from ...
Anthropic is intentionally exposing its AI models, like Claude, to evil traits during training to make them immune to these behaviours. The company says this is helping them teach AI to avoid such ...
But two new papers from the AI company Anthropic, both published on the preprint server arXiv, provide new insight into how ...
Anthropic found that pushing AI to "evil" traits during training can help prevent bad behavior later — like giving it a behavioral vaccine.
Anthropic studied what gives an AI system its ‘personality’ and what makes it ‘evil’. The company is also hiring for an ‘AI psychiatry’ team.
Researchers are trying to “vaccinate” artificial intelligence systems against developing harmful personality traits.
In a way, AI models launder human responsibility and human agency through their complexity. When outputs emerge from layers ...
A new study from Anthropic introduces "persona vectors," a technique for developers to monitor, predict and control unwanted LLM behaviors.
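The core idea behind persona vectors, as described across these reports, is that a behavioral trait corresponds to a direction in a model's activation space, which can then be added or subtracted to amplify or suppress the trait. A minimal toy sketch of that idea, using NumPy stand-in arrays rather than real transformer activations (the array shapes, the `steer` helper, and the sampled data are all illustrative assumptions, not Anthropic's implementation):

```python
import numpy as np

# Toy sketch of the "persona vector" idea: estimate the trait direction
# as the difference between mean hidden activations under trait-eliciting
# prompts and under neutral prompts. In a real model these activations
# would be captured from a transformer layer; here they are random
# stand-ins with an artificial offset along the first dimension.
rng = np.random.default_rng(0)
d = 8  # toy hidden size

evil_acts = rng.normal(size=(16, d)) + np.array([2.0] + [0.0] * (d - 1))
neutral_acts = rng.normal(size=(16, d))

# Persona vector: mean activation difference, normalized to unit length.
persona_vec = evil_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vec /= np.linalg.norm(persona_vec)

def steer(hidden, vec, alpha):
    """Shift a hidden state along the trait direction; a negative
    alpha pushes the state away from the trait."""
    return hidden + alpha * vec

h = neutral_acts[0]
h_suppressed = steer(h, persona_vec, alpha=-4.0)

# The projection onto the trait direction drops after negative steering.
print(h @ persona_vec, h_suppressed @ persona_vec)
```

Because the vector is unit-normalized, steering with coefficient `alpha` shifts the state's projection onto the trait direction by exactly `alpha`, which is what makes the magnitude of the intervention easy to control.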