The Sting of Fake Kling: Facebook Malvertising Lures Victims to Fake AI Generation Website research.checkpoint.com/2025/impe…

In early 2025, Check Point Research (CPR) began tracking a threat campaign that abuses the growing popularity of AI content generation platforms by impersonating Kling AI, a legitimate AI-powered image and video synthesis tool. Promoted through Facebook advertisements, the campaign directs users to a convincing spoof of Kling AI’s website, where visitors are invited to create AI-generated images or videos directly in the browser.

Instead of delivering a media file, the site offers a malicious “image or video output” for download. These files carry extensions such as .mp4 or .jpg but are in fact disguised Windows executables that use double extensions and Hangul Filler characters to hide their true nature in the filesystem and in file dialogs, as sketched below.
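
To make the masquerading concrete, the sketch below shows how a hypothetical lure filename can combine a fake media extension, a run of Hangul Filler characters (U+3164), and a real .exe suffix, along with a simple heuristic a defender might use to flag such names. The filename, the character set, and the looks_masqueraded helper are illustrative assumptions, not artifacts recovered from the campaign.

```python
# Minimal sketch (not campaign data): how a double-extension filename padded
# with Hangul Filler characters (U+3164) hides the real .exe suffix, plus a
# naive heuristic for flagging such names.

HANGUL_FILLER = "\u3164"  # renders as a blank glyph in most UIs

# Hypothetical lure filename: the fake ".mp4" extension is followed by a long
# run of filler characters that pushes ".exe" out of view in Explorer columns
# and file-open dialogs.
lure_name = "generated_video.mp4" + HANGUL_FILLER * 100 + ".exe"

# Characters commonly abused for filename masquerading (filler, zero-width,
# bidi control); an illustrative set, not an exhaustive one.
SUSPICIOUS_INVISIBLES = {HANGUL_FILLER, "\u200b", "\u200e", "\u202e"}

def looks_masqueraded(filename: str) -> bool:
    """Flag names that contain invisible padding characters or that pair a
    media-looking middle extension with an executable final extension."""
    has_invisible = any(ch in SUSPICIOUS_INVISIBLES for ch in filename)
    cleaned = "".join(ch for ch in filename if ch not in SUSPICIOUS_INVISIBLES)
    parts = cleaned.lower().split(".")
    claims_media = any(p in {"mp4", "jpg", "jpeg", "png"} for p in parts[1:-1])
    really_executable = parts[-1] in {"exe", "scr", "com", "pif"}
    return has_invisible or (claims_media and really_executable)

print(looks_masqueraded(lure_name))            # True
print(looks_masqueraded("holiday_photo.jpg"))  # False
```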

Users expecting to preview their generated video instead unknowingly launch a loader. In several instances, this executable uses .NET Native AOT (ahead-of-time) compilation, which complicates analysis and evades many traditional detection techniques. Once executed, the loader stages and deploys follow-up payloads, primarily infostealers that exfiltrate browser-stored credentials, session tokens, and other sensitive data.
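
Because the lure relies on the displayed extension and icon rather than the file's contents, a simple content check exposes it. The sketch below is a minimal illustration, not the report's detection logic: it reads a file's leading bytes to distinguish a genuine MP4 or JPEG from a Windows PE executable. The download path and filename are hypothetical.

```python
# Minimal sketch: classify a downloaded "video" by its magic bytes instead of
# trusting the extension shown in the browser or file dialog.

from pathlib import Path

def real_file_type(path: Path) -> str:
    """Return a coarse file type based on leading bytes, not the filename."""
    with path.open("rb") as f:
        header = f.read(12)
    if header[:2] == b"MZ":
        return "windows-executable"      # PE/COFF header: what the fake .mp4 really is
    if header[4:8] == b"ftyp":
        return "mp4-video"               # ISO BMFF "ftyp" box used by genuine MP4 files
    if header[:3] == b"\xff\xd8\xff":
        return "jpeg-image"
    return "unknown"

download = Path("Downloads/generated_video.mp4")  # hypothetical download path
if download.exists():
    kind = real_file_type(download)
    if kind == "windows-executable":
        print(f"Warning: {download.name} claims to be media but is a PE executable")
    else:
        print(f"{download.name} looks like: {kind}")
else:
    print("No such file; point this at a downloaded sample to test it")
```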

In our report, we detail the campaign’s end-to-end kill chain—from the initial lure to payload execution—and focus on the technical mechanisms used for file masquerading, obfuscation, and staged delivery. We analyze the loader’s internals and behavioral footprint and highlight the evolving techniques used to exploit trust in generative AI workflows.
