
Table 4 Different attacks targeted by different defense technologies

From: Adversarial attack and defense in reinforcement learning-from AI security view

Modifying input

| Defense technology | FGSM | IFGSM | PGM | CDG | DeepFool | C&W | JSMA | ITGSM |
| --- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Adversarial training | ✓ |  |  |  |  |  |  |  |
| Ensemble adversarial training | ✓ |  |  |  |  |  |  |  |
| Cascade adversarial training |  | ✓ |  |  |  |  |  |  |
| Principled adversarial training |  |  | ✓ |  |  |  |  |  |
| Gradient band-based adversarial training |  |  |  | ✓ |  |  |  |  |
| Data randomization |  |  |  |  | ✓ | ✓ |  |  |
| Input transformations |  |  |  |  | ✓ |  |  |  |
| Input gradient regularization |  |  |  |  |  |  | ✓ | ✓ |
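As a concrete illustration of the "Adversarial training" row, the sketch below crafts an FGSM example and takes one training step on a mix of clean and perturbed inputs. It assumes a generic PyTorch classifier; the function names and the `epsilon`/`alpha` hyperparameters are illustrative choices, not values from the paper.

```python
import torch
import torch.nn as nn

def fgsm_example(model, loss_fn, x, y, epsilon=0.1):
    """Craft an FGSM adversarial example: x_adv = x + epsilon * sign(grad_x L)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()  # populates x_adv.grad with the input gradient
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y,
                              epsilon=0.1, alpha=0.5):
    """One optimizer step on a weighted mix of clean and FGSM losses."""
    x_adv = fgsm_example(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = (alpha * loss_fn(model(x), y)
            + (1 - alpha) * loss_fn(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return float(loss)

# Toy usage with a stand-in linear classifier on random data.
model = nn.Linear(4, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 4), torch.randint(0, 3, (8,))
adversarial_training_step(model, nn.CrossEntropyLoss(), optimizer, x, y)
```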

Modifying the objective function

| Defense technology | FGSM | DeepFool | C&W | Small perturbations | PGD |
| --- | :---: | :---: | :---: | :---: | :---: |
| Adding stability term |  |  |  | ✓ |  |
| Adding regularization term |  | ✓ |  |  |  |
| Dynamic quantized activation function | ✓ |  | ✓ |  | ✓ |
| Stochastic activation pruning | ✓ |  |  |  |  |
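The "Adding stability term" row corresponds to augmenting the task loss with a penalty on output drift under small input perturbations. Below is a minimal sketch of that idea; the original stability-training formulation penalizes a KL divergence between softmax outputs, while this sketch substitutes a squared L2 distance for brevity, and `sigma`/`lam` are illustrative hyperparameters.

```python
import torch
import torch.nn as nn

def stability_training_loss(model, task_loss_fn, x, y, sigma=0.05, lam=0.01):
    """Task loss plus a stability term that penalizes output drift
    under small Gaussian input noise (the 'small perturbations' column)."""
    out_clean = model(x)
    out_noisy = model(x + sigma * torch.randn_like(x))
    task_loss = task_loss_fn(out_clean, y)
    # Stability term: squared L2 distance between clean and noisy outputs.
    stability = ((out_clean - out_noisy) ** 2).mean()
    return task_loss + lam * stability

# Toy usage on a stand-in linear model.
model = nn.Linear(4, 3)
x, y = torch.randn(8, 4), torch.randint(0, 3, (8,))
loss = stability_training_loss(model, nn.CrossEntropyLoss(), x, y)
loss.backward()
```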

Modifying the network structure

| Defense technology | FGSM | IFGSM | DeepFool | C&W | JSMA | ITGSM | BIM | Opt |
| --- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Defensive distillation | ✓ |  |  |  |  | ✓ |  |  |
| High-level representation guided denoiser | ✓ | ✓ |  |  |  |  |  |  |
| Add detector subnetwork | ✓ | ✓ | ✓ |  |  |  |  |  |
| Multi-model-based defense | ✓ |  |  | ✓ |  |  |  |  |
| Generative models | ✓ |  |  | ✓ |  |  |  |  |
| Characterizing adversarial subspaces | ✓ |  |  | ✓ | ✓ |  | ✓ | ✓ |
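For the "Add detector subnetwork" row, the idea is to attach a small binary classifier to the intermediate features of a frozen model and train it to separate clean from adversarial inputs. A minimal sketch follows; the module name, feature dimension, and layer sizes are hypothetical, not taken from the referenced defense.

```python
import torch
import torch.nn as nn

class DetectorSubnetwork(nn.Module):
    """Small binary head on a classifier's intermediate features:
    predicts clean (0) vs adversarial (1) inputs."""
    def __init__(self, feature_dim: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feature_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.head(features)

# Toy usage: the detector is trained on features of clean and
# adversarial inputs while the main classifier stays frozen.
detector = DetectorSubnetwork(feature_dim=128)
feats = torch.randn(8, 128)            # stand-in for intermediate features
labels = torch.randint(0, 2, (8,))     # 0 = clean, 1 = adversarial
loss = nn.CrossEntropyLoss()(detector(feats), labels)
loss.backward()
```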