Attackers can more easily introduce malicious data into AI models than previously thought, according to a new study from Antropic.
Poisoned AI models can produce malicious outputs, leading to follow-on attacks. For example, attackers can train an AI model to provide links to phishing sites or plant backdoors in AI-generated code.
![]()
