What does it mean to "Create a new model and train it to be robust to Fast Gradient Method attack"? It appears that "model2" is trained and evaluated on adversarial examples generated by replacing the clean/legitimate input x with the values of adv_x. Is that all there is to it? For the MNIST case, for example, how does an initial 9-10% test accuracy on adversarial images become >95% test accuracy after retraining? Is there some other "make robust" algorithm implemented?

Best,
AT
--
You received this message because you are subscribed to the Google Groups "cleverhans dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cleverhans-de...@googlegroups.com.
To post to this group, send email to cleverh...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cleverhans-dev/cca3d4ec-3555-4dc5-8c00-34cec8629d96%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Yes, that is all there is to it: this part is just referring to adversarial training on FGSM adversarial examples. Training the model on examples crafted by FGSM makes it become robust to them; no other "make robust" algorithm is involved.
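To make the loop concrete, here is a minimal, self-contained sketch of FGSM adversarial training in plain NumPy, using a logistic-regression toy model instead of the tutorial's MNIST convnet. All names here are illustrative, not the cleverhans API; for a linear toy model the robustness gain is modest compared with the MNIST numbers in the question, but the training loop is the same idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs in 2-D.
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, size=(n, 2)),
               rng.normal(+2.0, 1.0, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    # FGSM: take a step of size eps in the direction of the sign of
    # the gradient of the loss with respect to the *input*.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]  # dL/dx for logistic loss
    return X + eps * np.sign(grad_x)

def train(X, y, epochs=200, lr=0.1, adv_eps=None):
    # Plain logistic regression by gradient descent. If adv_eps is
    # given, each update is computed on FGSM examples crafted against
    # the *current* weights, i.e. adversarial training.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        Xt = X if adv_eps is None else fgsm(X, y, w, b, adv_eps)
        p = sigmoid(Xt @ w + b)
        w -= lr * Xt.T @ (p - y) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

def accuracy(X, y, w, b):
    return float(np.mean((sigmoid(X @ w + b) > 0.5) == y))

eps = 1.0
w1, b1 = train(X, y)               # "model1": trained on clean data only
w2, b2 = train(X, y, adv_eps=eps)  # "model2": adversarially trained

# Evaluate each model on FGSM examples crafted against itself.
acc1 = accuracy(fgsm(X, y, w1, b1, eps), y, w1, b1)
acc2 = accuracy(fgsm(X, y, w2, b2, eps), y, w2, b2)
```

In the MNIST tutorial the same idea is applied to a convnet: the adversarial batches are regenerated against the current weights at every step, so the model is always being trained on the attack that currently fools it, which is why its adversarial test accuracy climbs from ~10% to >95%.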
On Tue, Nov 27, 2018 at 3:37 PM 'ephi...@yahoo.com' via cleverhans dev <cleverhans-dev@googlegroups.com> wrote: