Adversarial Machine Learning

ModelGuard: Information-Theoretic Defense Against Model Extraction Attacks
Proposed a novel defense against adaptive model extraction attacks that perturbs the model's returned predictions, using information-theoretic principles to bound how much an adversary can learn per query.