All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack - SecurityWeek

A new attack technique, Policy Puppetry, can bypass the safety guardrails of major generative AI models. By crafting prompts as policy files, attackers can override instructions and produce harmful outputs. The technique was successfully tested against various AI models, highlighting the need for additional security tools.

*****
Written on