Claude-creator Anthropic has found that it's actually easier to 'poison' Large Language Models than previously thought. In a recent blog post, Anthropic explains that as few as "250 malicious ...