How global threat actors are weaponizing AI now, according to OpenAI www.zdnet.com/article/h…
As generative AI has spread in recent years, so too have fears over the technology’s misuse and abuse.
Tools like ChatGPT can produce realistic text, images, video, and speech. The developers behind these systems promise productivity gains for businesses and enhanced human creativity, while many safety experts and policymakers worry about the surge of misinformation, among other dangers, that these systems enable.
OpenAI, arguably the leader in this ongoing AI race, publishes an annual report highlighting the myriad ways in which its AI systems are being used by bad actors. "AI investigations are an evolving discipline," the company wrote in the latest version of its report, released Thursday. "Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses."
The new report detailed 10 examples of abuse from the past year, four of which appear to originate in China.