Poison pill defence protects proprietary AI data from theft

Source: www.databreachtoday.com/poison-pi…

Chinese and Singaporean researchers have developed a defence mechanism that poisons proprietary knowledge-graph data, rendering it unusable for attackers who attempt to deploy the stolen information in unauthorized artificial intelligence systems. The threat model assumes an adversary has obtained a knowledge graph through external cyber intrusion or malicious insider activity but does not have access to a required secret key.

The defence framework, known as AURA (Active Utility Reduction via Adulteration), injects plausible but false information into knowledge graphs prior to deployment. The system identifies high-impact nodes and uses a hybrid generation strategy to create adulterants that appear credible at both the semantic and structural levels.

For authorized users in possession of the secret key, encrypted metadata tags are used to filter out all adulterated content before information is passed to the large language model, preserving query accuracy. Attackers running stolen knowledge graphs in isolated environments instead retrieve the false data as context, degrading LLM reasoning and producing factually incorrect outputs.
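The summary does not spell out AURA's actual tagging or encryption scheme, so the Python sketch below is only an illustration of the key-gated filtering idea: adulterated triples carry a keyed tag (HMAC-SHA256 here, standing in for the encrypted metadata tag), the authorized pipeline strips anything whose tag verifies under the secret key, and an attacker without the key retrieves the graph with the poison still in it. All names (`Triple`, `inject_adulterant`, `filter_for_authorized`) are hypothetical, not taken from the paper.

```python
# Minimal sketch of key-gated adulterant filtering, assuming an
# HMAC-based tag as a stand-in for AURA's encrypted metadata tag.
import hmac
import hashlib
from dataclasses import dataclass, field

SECRET_KEY = b"owner-held secret key"  # never shipped with the graph


@dataclass
class Triple:
    subject: str
    relation: str
    obj: str
    meta: dict = field(default_factory=dict)


def _mac(triple: Triple) -> str:
    """Keyed tag binding the adulterant marker to this triple."""
    msg = f"{triple.subject}|{triple.relation}|{triple.obj}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()


def inject_adulterant(graph: list[Triple], fake: Triple) -> None:
    """Add a plausible-but-false triple, tagged so that only a
    key holder can recognise and strip it later."""
    fake.meta["tag"] = _mac(fake)
    graph.append(fake)


def filter_for_authorized(graph: list[Triple]) -> list[Triple]:
    """Authorized path: drop every triple whose tag verifies under
    the secret key, so only genuine facts reach the LLM."""
    return [t for t in graph
            if not hmac.compare_digest(t.meta.get("tag", ""), _mac(t))]


# Demo: one real fact plus one injected adulterant.
graph = [Triple("DrugX", "treats", "ConditionY")]
inject_adulterant(graph, Triple("DrugX", "contraindicated_with", "ConditionY"))

assert len(filter_for_authorized(graph)) == 1  # key holder sees truth only
assert len(graph) == 2                         # a thief retrieves the poison too
```

The design point this sketch tries to capture is that the filter is purely key-dependent: without the secret key, an adulterated triple is indistinguishable from a genuine one at both the data and metadata level, so the stolen graph silently feeds false context to the attacker's LLM.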