
Fig. 11

From: Adversarial attack and defense in reinforcement learning-from AI security view

An example of PixelDefend (Song et al. 2017). The first image in the top row is the original clean image from CIFAR-10 (Krizhevsky et al. 2014); the remaining images are adversarial examples generated by the attack methods named above each example, with the predicted label shown at the bottom. The second row shows the corresponding purified images
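
To make the purification step in the figure concrete, here is a minimal sketch of PixelDefend-style purification: each pixel is greedily moved toward higher likelihood under a pretrained PixelCNN while staying within an L-infinity ball around the input. The `pixel_cnn.conditional` API, the function name `purify`, and the default radius are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def purify(x_adv, pixel_cnn, eps_defend=16):
    """Greedy PixelDefend-style purification (sketch).

    x_adv: uint8 image of shape (H, W, C) in 0-255 units.
    pixel_cnn: assumed to expose a conditional(x, i, j, c) method
        returning the 256-way distribution for pixel (i, j, c)
        given the pixels preceding it in raster order.
    eps_defend: L-infinity purification radius in intensity units.
    """
    x = x_adv.copy()
    H, W, C = x.shape
    for i in range(H):            # raster-scan order, matching the
        for j in range(W):        # PixelCNN autoregressive factorization
            for c in range(C):
                # Hypothetical API: distribution over the 256 intensity
                # values for this pixel, conditioned on pixels already set.
                probs = pixel_cnn.conditional(x, i, j, c)  # shape (256,)
                lo = max(0, int(x_adv[i, j, c]) - eps_defend)
                hi = min(255, int(x_adv[i, j, c]) + eps_defend)
                # Keep the most likely value inside the allowed range.
                x[i, j, c] = lo + int(np.argmax(probs[lo:hi + 1]))
    return x
```

Restricting the argmax to the interval around the original pixel is what keeps the purified image visually close to the input while pushing it back toward the training distribution, which is the effect shown in the second row of the figure.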
