AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Cases

Cybersecurity researchers have demonstrated that large language models (LLMs) can be used to generate novel variants of malicious JavaScript code at scale, making them significantly harder for detection systems to flag. By iteratively rewriting existing malware samples with various obfuscation techniques, researchers created 10,000 novel JavaScript variants that evaded detection by machine learning models and other malware analyzers. While concerning, the same technique can also be used to improve the robustness of ML models by generating adversarial training data.
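To see why semantics-preserving rewrites defeat signature-style detection, consider a minimal, benign sketch (not the researchers' actual pipeline): a single obfuscation pass that renames variables in a harmless JavaScript snippet. Each pass leaves behavior unchanged but produces a distinct file hash, so any detector keyed to exact signatures misses every variant. The function and sample below are illustrative assumptions, not code from the study.

```python
import hashlib
import random
import re

def rename_identifiers(src: str, rng: random.Random) -> str:
    """Semantics-preserving rewrite: give each declared variable a fresh random name.

    This is one simple obfuscation transform; the study describes LLMs applying
    many such rewrites iteratively.
    """
    names = re.findall(r"\bvar\s+(\w+)", src)
    out = src
    for name in names:
        new = "v" + "".join(rng.choices("abcdef0123456789", k=8))
        out = re.sub(rf"\b{name}\b", new, out)
    return out

# A harmless stand-in for a malware sample.
sample = 'var greeting = "hi"; console.log(greeting);'

rng = random.Random(0)
variants = {rename_identifiers(sample, rng) for _ in range(5)}
hashes = {hashlib.sha256(v.encode()).hexdigest() for v in variants}

# Every variant behaves identically to the original, yet each has a
# distinct hash, so a signature list matching the original catches none.
print(len(variants), len(hashes))
```

The defensive flip side follows directly: feeding such variants back into an ML classifier as labeled training data teaches it to key on behavior-relevant features rather than surface text.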
