
Non-robust feature transfer for a new adversarial attack against neural image classifiers

By Alexandre Variengien

Context

This repository contains the code for a project in the CS-503 course "Visual Intelligence: machines and mind" at EPFL. The full project report is available as a PDF here.

Instructions

This project implements a new adversarial attack against image classifiers. The goal is to make a classifier recognize content that humans cannot see: the adversarial example looks like a genuine image to a human observer, while the network classifies it as a secret image.
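As a rough illustration of the idea (a sketch, not the repository's exact procedure), the PyTorch loop below perturbs a genuine image within a small L-infinity budget until the model predicts the secret image's class. The function name and hyperparameters are assumptions; the notebooks may do this differently.

```python
import torch
import torch.nn.functional as F

def transfer_non_robust_features(model, genuine, secret_label,
                                 eps=8 / 255, step=1 / 255, n_steps=100):
    """PGD-style targeted attack (hypothetical helper, not the repo's API):
    nudge `genuine` toward `secret_label` within an L-inf ball of radius `eps`.
    Expects `genuine` as a (1, 3, H, W) tensor in [0, 1] and `secret_label`
    as a length-1 LongTensor."""
    model.eval()
    adv = genuine.clone().detach()
    for _ in range(n_steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), secret_label)
        (grad,) = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            # Targeted attack: descend on the secret-class loss.
            adv = adv - step * grad.sign()
            # Stay within eps of the genuine image and in the valid pixel
            # range, so the result still looks genuine to a human.
            adv = genuine + (adv - genuine).clamp(-eps, eps)
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```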

The attack can be reproduced directly on the CIFAR-10 dataset against three models in a Colab notebook. Try it now here!

I also included the notebook used to run the attack on the ImageNet dataset. To run it, place it next to a folder called ImageNet containing images grouped into subfolders, one per class.
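If the notebook follows torchvision's ImageFolder convention (an assumption; check the notebook's data-loading cell), the expected layout and loading would look like this:

```python
from torchvision import datasets, transforms

# Assumed layout (one subfolder per class):
#   ImageNet/
#       n01440764/  img1.JPEG, img2.JPEG, ...
#       n01443537/  ...
dataset = datasets.ImageFolder(
    "ImageNet",
    transform=transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]),
)
```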

About

I tried to transfer the non-robust features of a secret image into a genuine image to create an adversarial example that a CNN recognizes as the secret image but that looks like the genuine one.
