
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks


rami18

Jul 26, 2017, 2:40:01 PM
Hello...

Read this:

Adversarial perturbations are not "natural" images. They are regular
patterns that simply do not occur in nature. This is possibly the most
important unexplained aspect of neural networks and machine learning,
and it is being studied as a security and safety problem. What if some
evil person misleads a machine intelligence? What if a self-driving car
is made to crash because an adversarial signal is injected into the
video feed?
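To make the idea concrete, here is a minimal sketch of crafting such a perturbation with the Fast Gradient Sign Method (FGSM). Note this illustrates the attack, not the paper's defense (the paper proposes defensive distillation), and it uses a toy linear classifier in plain NumPy as a stand-in for a deep network; the mechanism is the same.

```python
import numpy as np

# Toy linear classifier: class 1 if w.x + b > 0.
# (An illustrative stand-in for a deep network.)
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y_true, eps):
    # Gradient of the cross-entropy loss with respect to the INPUT x
    # (not the weights, as in training).
    grad = (sigmoid(w @ x + b) - y_true) * w
    # FGSM: take one step of size eps in the sign of that gradient.
    return x + eps * np.sign(grad)

x = np.array([0.3, 0.2])            # correctly classified as class 1
x_adv = fgsm(x, y_true=1, eps=0.2)  # small L-infinity perturbation

print(predict(x), predict(x_adv))   # the perturbed copy is misclassified
```

The perturbation is bounded by eps in every coordinate, so to a human the two inputs look essentially identical, yet the classifier's decision flips.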

Here is an interesting paper about it:

Distillation as a Defense to Adversarial Perturbations against Deep
Neural Networks

https://arxiv.org/pdf/1511.04508.pdf




Thank you,
Amine Moulay Ramdane
