Meta Launches LlamaFirewall Framework to Stop AI Jailbreaks, Injections, and Insecure Code

Meta launched LlamaFirewall, an open-source framework to secure AI systems against cyber risks like prompt injection and jailbreaks. The framework includes three guardrails: PromptGuard 2, Agent Alignment Checks, and CodeShield. Alongside LlamaFirewall, Meta released updated versions of LlamaGuard and CyberSecEval, and launched the Llama for Defenders program.

*****
Written on