AI Still Writing Vulnerable Code www.databreachtoday.com/ai-still-…
Artificial intelligence may be writing more of today's code, but it is also writing in vulnerabilities. Large language models introduce security flaws in nearly half of test cases when asked to complete secure coding tasks, and there has been little improvement in how well AI models handle core security decisions, according to a report from application security company Veracode.
AI models are getting better at writing syntactically correct code, but not at writing secure code, the report finds. “LLMs are fantastic tools for developing software, but blind faith is not the way to go,” Veracode CTO Jens Wessling told Information Security Media Group.
Veracode analyzed 80 curated coding tasks drawn from well-established Common Weakness Enumeration (CWE) classes, including SQL injection, cryptographic weaknesses, cross-site scripting and log injection, each representing a risk category in the OWASP Top 10. Veracode tested more than 100 LLMs across these tasks, using static analysis to assess the output. The results confirm what many security teams suspected: GenAI is transforming the pace of development, but it is not yet reliable when it comes to risk.
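To make the stakes concrete, here is a minimal sketch of one weakness class the study covers, SQL injection (CWE-89). The specific table, function names and payload are hypothetical, not drawn from Veracode's test set; the point is the contrast between the string-concatenation pattern a model may emit and the parameterized query a secure completion should use.

```python
import sqlite3

# Illustrative fixture only: a hypothetical in-memory users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name):
    # Insecure pattern: user input is concatenated directly into the SQL text,
    # so the input can rewrite the query itself (CWE-89, SQL injection).
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Secure pattern: a parameterized query keeps input as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A crafted input turns the naive query into "match every row":
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks all rows: [('admin',)]
print(find_user_safe(payload))        # matches nothing: []
```

Both functions compile and run, which underlines the report's distinction: static analysis flags the first as vulnerable even though it is syntactically correct code.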