Hackers Exploit ‘Summarize with AI’ Feature to Inject Malicious Prompts into AI Recommendations
Hackers are exploiting ‘Summarize with AI’ features to inject malicious prompts into AI recommendations, a tactic Microsoft calls AI Recommendation Poisoning. This method can subtly influence AI assistants like Copilot and ChatGPT to favor specific sources or brands, potentially leading users to biased or untrustworthy information.