
FGSM-pytorch

A pytorch implementation of "Explaining and harnessing adversarial examples"

Summary

This code is a PyTorch implementation of FGSM (Fast Gradient Sign Method).
In this code, FGSM is used to fool Inception v3.
The 'Giant Panda' picture is the same one used in the paper.
You can add other pictures by placing them in a folder named after their label inside the 'data' directory.
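
A minimal sketch of how the attack could be run against a torchvision Inception v3, assuming a preprocessed input tensor; the `fgsm_attack` helper, the random stand-in input, and the class index / epsilon values are illustrative and not taken from the repository's notebook:

```python
import torch
import torch.nn as nn
from torchvision import models

def fgsm_attack(model, loss_fn, image, label, epsilon):
    """FGSM: x_adv = x + epsilon * sign(grad_x J(theta, x, y))."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    model.zero_grad()
    loss.backward()
    # Step in the direction that increases the loss fastest, per pixel.
    return (image + epsilon * image.grad.sign()).detach()

# Hypothetical usage (a real run would load an image from the 'data' folder):
model = models.inception_v3(pretrained=True).eval()
x = torch.randn(1, 3, 299, 299)    # stand-in for a preprocessed 299x299 image
y = torch.tensor([388])            # ImageNet index 388 = 'giant panda'
x_adv = fgsm_attack(model, nn.CrossEntropyLoss(), x, y, epsilon=0.007)
print(model(x_adv).argmax(dim=1))  # prediction often flips away from the true label
```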

Requirements

  • python==3.6
  • numpy==1.14.2
  • pytorch==1.0.0

Important results from the paper not covered in the code

  • Mathematical results
    • There are some important differences between adversarial training and L1 weight decay. (p.4)
      • For logistic regression (see the sketch after this list):
        • Adversarial training: the L1 penalty is subtracted off inside the activation during training.
        • L1 weight decay: the L1 penalty is added to the training cost (i.e., outside the activation) during training.
  • Experimental results
    • FGSM can be used as a regularizer (objective sketched after this list), but it does not defend against all adversarial examples. (p.5)
    • RBF networks are resistant to adversarial examples, but linear models are not. (p.7)
      • The authors claim that current methodologies all resemble the linear classifier, which is why adversarial examples generalize across models.
    • Alternative hypotheses (generative models of the input distribution, ensembles) are not resistant to adversarial examples. (p.8)
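
For reference, a sketch of the two points above as I read them in the paper, assuming labels y ∈ {−1, 1} and the softplus ζ(z) = log(1 + e^z) used for logistic regression:

```latex
% Plain logistic regression loss:
%   E_{x,y} [ zeta( -y (w^T x + b) ) ]

% Adversarial training: the epsilon * ||w||_1 term appears *inside* the activation
\mathbb{E}_{x,y}\;\zeta\!\bigl(y\,(\epsilon \lVert w \rVert_1 - w^{\top} x - b)\bigr)

% L1 weight decay: the penalty is added *outside*, to the total training cost
\mathbb{E}_{x,y}\;\zeta\!\bigl(-y\,(w^{\top} x + b)\bigr) + \lambda \lVert w \rVert_1

% FGSM used as a regularizer (adversarial training objective):
\tilde{J}(\theta, x, y) = \alpha\, J(\theta, x, y)
  + (1 - \alpha)\, J\bigl(\theta,\; x + \epsilon\,\mathrm{sign}(\nabla_x J(\theta, x, y)),\; y\bigr)
```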
