Hello...
Read this:
Adversarial perturbations are not "natural" images. They are regular,
structured patterns that simply do not occur in nature. This is possibly
the most important unexplained aspect of neural networks and machine
learning, and it is being studied as a security and safety problem. What
if some malicious person misleads a machine intelligence? What if a
self-driving car is made to crash because an adversarial signal is
injected into its video feed?
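
To make this concrete, here is a minimal sketch of the fast gradient sign
method (FGSM), one standard way such perturbations are constructed. It
assumes a PyTorch image classifier called model, an input batch image with
pixel values in [0, 1], and its true label; these names are illustrative
and are not taken from the paper below.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    # Nudge every pixel by +/- epsilon in the direction that
    # increases the classification loss for the true label.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Keep the result a valid image and stop tracking gradients.
    return adversarial.clamp(0.0, 1.0).detach()

The added term epsilon * image.grad.sign() is tiny and looks like
structured noise to a human, yet it can flip the classifier's prediction.
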
Here is an interesting paper about it:
Distillation as a Defense to Adversarial Perturbations against Deep
Neural Networks
https://arxiv.org/pdf/1511.04508.pdf
Thank you,
Amine Moulay Ramdane,