News

Anthropic found that pushing AI toward "evil" traits during training can help prevent bad behavior later -- like giving it a ...
But two new papers from the AI company Anthropic, both published on the preprint server arXiv, provide new insight into how ...
On Friday, Anthropic debuted research unpacking how an AI system’s “personality” — as in, tone, responses, and overarching ...
AI is a relatively new tool, and despite its rapid deployment in nearly every aspect of our lives, researchers are still ...
New Anthropic research shows that undesirable LLM traits can be detected—and even prevented—by examining and manipulating the ...
Using two open-source models (Qwen 2.5 and Meta’s Llama 3), Anthropic engineers went deep into the neural networks to find the ...
Researchers are testing new ways to prevent and predict dangerous personality shifts in AI models before they occur in the wild.
In the paper, Anthropic explained that it can steer models along these vectors by instructing them to act in certain ways -- for example, if it injects an "evil" prompt into the model, the model will respond from ...
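The steering idea these snippets describe can be sketched in a few lines: take the mean difference between activations on trait-eliciting and neutral prompts as a "persona vector," then shift a hidden state along that direction. This is a minimal NumPy toy, not Anthropic's actual code -- the random activations, the 8-dimensional state, and the helper names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for model activations (hypothetical 8-dim hidden states).
# In the research these would come from a real LLM's layers; here they are random.
evil_acts = rng.normal(loc=1.0, size=(100, 8))     # activations on trait-eliciting prompts
neutral_acts = rng.normal(loc=0.0, size=(100, 8))  # activations on neutral prompts

# A "persona vector": the mean activation difference between the two prompt sets,
# normalized to a unit direction.
persona_vec = evil_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vec /= np.linalg.norm(persona_vec)

def steer(hidden, direction, alpha):
    """Shift a hidden state along the persona direction by strength alpha."""
    return hidden + alpha * direction

def trait_score(hidden, direction):
    """Projection of a hidden state onto the persona direction."""
    return float(hidden @ direction)

h = neutral_acts[0]
base = trait_score(h, persona_vec)
# Because the direction is unit-norm, steering raises the projection by exactly alpha.
steered = trait_score(steer(h, persona_vec, 2.0), persona_vec)
print(base, steered)
```

The same projection can run in the other direction: monitoring `trait_score` during generation is the detection half of the technique, while subtracting the vector (negative `alpha`) is the suppression half.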
AI is supposed to be helpful, honest, and, most importantly, harmless, but we've seen plenty of evidence that its behavior can ...
In a way, AI models launder human responsibility and human agency through their complexity. When outputs emerge from layers of neural networks processing billions of parameters, researchers can claim ...
Anthropic revealed breakthrough research using "persona vectors" to monitor and control artificial intelligence personality ...
It's August, which means Hot Science Summer is two-thirds over. This week, NASA released an exceptionally pretty photo of ...