News

AI is supposed to be helpful, honest, and most importantly, harmless, but we've seen plenty of evidence that its behavior can ...
Malicious traits can spread between AI models while being undetectable to humans, Anthropic and Truthful AI researchers say.
A new study from Anthropic suggests that traits such as sycophancy or evilness are associated with specific patterns of ...
In the paper, Anthropic explained that it can extract these vectors by instructing models to act in certain ways; for example, if it injects an evil prompt into the model, the model will respond from ...
AI is a relatively new tool, and despite its rapid deployment in nearly every aspect of our lives, researchers are still ...
Using two open-source models (Qwen 2.5 and Meta’s Llama 3), Anthropic engineers went deep into the neural networks to find the ...
Anthropic found that pushing AI toward "evil" traits during training can help prevent bad behavior later, like giving it a ...
Anthropic is intentionally exposing its AI models, such as Claude, to "evil" traits during training to make them immune to these ...
Anthropic revealed breakthrough research using "persona vectors" to monitor and control artificial intelligence personality ...
Researchers are trying to “vaccinate” artificial intelligence systems against developing harmful personality traits.
A new study from Anthropic introduces "persona vectors," a technique for developers to monitor, predict and control unwanted LLM behaviors.
I’ve chatted with enough bots to know when something feels a little off. Sometimes, they’re overly flattering. Other times, ...
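Taken together, the coverage describes a fairly simple recipe: elicit a trait with contrastive prompts, average the model's hidden activations in each condition, and treat the difference of means as a "persona vector" that can be used to monitor or steer behavior. The sketch below illustrates that contrastive-mean idea on toy activation data with NumPy; the array shapes, prompt conditions, and steering coefficient are illustrative assumptions, not Anthropic's actual implementation.

```python
# Minimal sketch of the contrastive-mean "persona vector" idea described above.
# Toy data stands in for a real LLM's hidden activations; nothing here loads a model.
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 64

# Pretend activations collected while the model follows trait-eliciting
# ("act sycophantic") vs. neutral system prompts. Shapes: (n_samples, hidden_dim).
trait_acts = rng.normal(size=(200, hidden_dim))
trait_acts[:, 7] += 2.0          # toy "trait" signal concentrated along one direction
neutral_acts = rng.normal(size=(200, hidden_dim))

# Persona vector: difference of mean activations, normalized to unit length.
persona_vec = trait_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vec /= np.linalg.norm(persona_vec)

def trait_score(activation: np.ndarray) -> float:
    """Monitor: project an activation onto the persona vector."""
    return float(activation @ persona_vec)

def steer(activation: np.ndarray, coeff: float) -> np.ndarray:
    """Steer: nudge an activation along (or against) the persona vector."""
    return activation + coeff * persona_vec

sample = trait_acts[0]
print("score before steering:", trait_score(sample))
print("score after steering away:", trait_score(steer(sample, coeff=-2.0)))
```

In the study the articles describe, the activations would come from a chosen layer of an open model such as Qwen 2.5 or Llama 3 and the steering would be applied during generation; the toy arrays here only show the arithmetic.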