From: Adversarial attack and defense in reinforcement learning-from AI security view
| Modifying input | FGSM | IFGSM | PGM | CDG | DeepFool | C&W | JSMA | ITGSM |
|---|---|---|---|---|---|---|---|---|
| Adversarial training | \(\checkmark \) | | | | | | | |
| Ensemble adversarial training | \(\checkmark \) | | | | | | | |
| Cascade adversarial training | | \(\checkmark \) | | | | | | |
| Principled adversarial training | | | \(\checkmark \) | | | | | |
| Gradient band-based adversarial training | | | | \(\checkmark \) | | | | |
| Data randomization | | | | | \(\checkmark \) | \(\checkmark \) | | |
| Input transformations | | | | | \(\checkmark \) | | | |
| Input gradient regularization | | | | | | | \(\checkmark \) | \(\checkmark \) |
| Modifying the objective function | FGSM | DeepFool | C&W | Small perturbations | PGD |
|---|---|---|---|---|---|
| Adding stability term | | | | \(\checkmark \) | |
| Adding regularization term | | \(\checkmark \) | | | |
| Dynamic quantized activation function | \(\checkmark \) | | \(\checkmark \) | | \(\checkmark \) |
| Stochastic activation pruning | \(\checkmark \) | | | | |
| Modifying the network structure | FGSM | IFGSM | DeepFool | C&W | JSMA | ITGSM | BIM | Opt |
|---|---|---|---|---|---|---|---|---|
| Defensive distillation | \(\checkmark \) | | | | | \(\checkmark \) | | |
| High-level representation Guided Denoiser | \(\checkmark \) | \(\checkmark \) | | | | | | |
| Add detector subnetwork | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | | | | | |
| Multi-model-based defense | \(\checkmark \) | | | \(\checkmark \) | | | | |
| Generative models | \(\checkmark \) | | | \(\checkmark \) | | | | |
| Characterizing adversarial subspaces | \(\checkmark \) | | | \(\checkmark \) | \(\checkmark \) | | \(\checkmark \) | \(\checkmark \) |
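FGSM appears in nearly every row above, as it is the baseline attack most defenses are evaluated against. As a minimal illustration (not from the paper), the sketch below applies single-step FGSM to a toy logistic-regression model; the weights, label, and step size `eps` are arbitrary choices for the example:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Single-step FGSM on a logistic-regression model (illustrative sketch).

    Loss: binary cross-entropy of sigmoid(w.x + b) against label y in {0, 1}.
    The input gradient of that loss is (p - y) * w, so FGSM returns
    x + eps * sign(grad), the perturbation that maximally increases the loss
    under an L-infinity budget of eps.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted P(class 1)
    grad_x = (p - y) * w                            # d(loss)/dx
    return x + eps * np.sign(grad_x)

# Example: perturbing a correctly classified point increases its loss.
x = np.array([1.0, -2.0])
w = np.array([0.5, 1.0])
x_adv = fgsm_perturb(x, y=1.0, w=w, b=0.0, eps=0.1)
```

The "adversarial training" rows in the first table correspond to generating such `x_adv` samples during training and including them in the loss, which is why those defenses are evaluated against the same attack used to craft the training perturbations.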