DeepFool generates perturbed images, but the predicted category does not change #5

Open
mengqid123 opened this issue Dec 6, 2019 · 3 comments


@mengqid123

I used test_deepfool.py to test two images. When the generated perturbed image is fed back into the network for a forward pass, the predicted category does not change.
The images produced directly by DeepFool show relatively large changes. After the code applies some normalization to the perturbed image, it looks much closer to the original image, but the predicted category of the generated image still does not change.
For example:
For test_im1.jpg, the original category is macaw, and DeepFool reports that the category of the perturbed image is flamingo (the image output directly by the network looks dramatically different from the original).
But after a series of further operations the image is very similar to the original, yet its predicted category is still macaw.
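
One way to narrow this down is to classify the perturbed tensor returned by deepfool() before any de-normalization, clipping, or saving to disk. A minimal sketch, assuming deepfool.py from this repository is importable, that the test image is test_im1.jpg, that the network is a pretrained ResNet-34 with standard ImageNet preprocessing, and that deepfool() returns (r_tot, loop_i, original_label, perturbed_label, perturbed_image); adjust the names to whatever test_deepfool.py actually uses:

import torch
from PIL import Image
from torchvision import models, transforms
from deepfool import deepfool  # deepfool.py from this repository (assumed importable)

net = models.resnet34(pretrained=True).eval()

# Standard ImageNet statistics; test_deepfool.py is assumed to use the same ones.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
preprocess = transforms.Compose([transforms.Resize(256),
                                 transforms.CenterCrop(224),
                                 transforms.ToTensor(),
                                 transforms.Normalize(mean=mean, std=std)])

im = preprocess(Image.open('test_im1.jpg').convert('RGB'))

# Assumed return signature: (r_tot, loop_i, original_label, perturbed_label, perturbed_image).
r_tot, loop_i, orig_label, pert_label, pert_image = deepfool(im, net)

# Re-classify the perturbed tensor directly, before any post-processing.
x = pert_image if pert_image.dim() == 4 else pert_image.unsqueeze(0)
with torch.no_grad():
    rechecked = int(net(x).argmax())
print('original:', orig_label, 'DeepFool label:', pert_label, 're-checked:', rechecked)

If the re-checked label already matches the original here, the problem is in DeepFool itself; if it only reverts after the image is de-normalized and saved, the post-processing is destroying the perturbation.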

@tuji-sjp

tuji-sjp commented Apr 2, 2020

The post-processed adversarial examples have been destroyed; once that happens, the attack cannot succeed no matter what method is used. No further operations should be applied to the generated adversarial examples.
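
To make this concrete: even a lossless save quantizes pixel values to 8 bits, and that rounding alone is on the same order as a typical DeepFool perturbation. A small illustration of the round-trip loss, not code from this repository:

import torch
from PIL import Image
from torchvision import transforms

to_pil = transforms.ToPILImage()
to_tensor = transforms.ToTensor()

x = torch.rand(3, 224, 224)                # stand-in for a de-normalized image in [0, 1]
r = 0.002 * (2 * torch.rand_like(x) - 1)   # small DeepFool-style perturbation, |r| <= 0.002
x_adv = (x + r).clamp(0, 1)

# Round-trip through an 8-bit image file: the quantization error is up to about 1/255,
# i.e. comparable to the perturbation itself, so the perturbation is effectively lost.
to_pil(x_adv).save('adv.png')
x_back = to_tensor(Image.open('adv.png'))

print('perturbation L-inf:    ', (x_adv - x).abs().max().item())
print('round-trip error L-inf:', (x_back - x_adv).abs().max().item())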

@WeihanGao

WeihanGao commented May 2, 2020

I got the same result as mengqid123. I think there are some operations at the beginning of test_deepfool.py that destroy the small change that "fools" the network on the original image. If you set a very high overshoot in deepfool.py, you will get the same result.
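
For reference, overshoot is the factor by which DeepFool scales the final perturbation past the estimated decision boundary (the default in the original implementation is 0.02). Continuing the sketch above, and assuming deepfool() accepts it as a keyword argument:

# A larger overshoot pushes the perturbed image further past the decision boundary,
# so the perturbation is more likely to survive clipping and 8-bit quantization,
# at the cost of a more visible change to the image.
r_tot, loop_i, orig_label, pert_label, pert_image = deepfool(im, net, overshoot=0.5)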

@Uncle-Zeng

There may be a loss of precision in the normalization or inverse-normalization step; you can try the following for the inverse normalization:

from torchvision import transforms

# Undo Normalize(mean, std), clip to the valid range, and convert back to a PIL image.
tf = transforms.Compose([transforms.Normalize(mean=[-m / s for m, s in zip(mean, std)],
                                              std=[1 / s for s in std]),
                         transforms.Lambda(clip),
                         transforms.ToPILImage(),
                         transforms.CenterCrop(224)])
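
This snippet assumes that mean, std, and clip are already defined in test_deepfool.py. A hedged sketch of definitions that make it self-contained (the ImageNet statistics and the [0, 1] clip range are assumptions; the repository's own values may differ), followed by applying tf to the perturbed tensor from the earlier sketch:

import torch

# Assumed ImageNet statistics, matching the forward Normalize that tf undoes.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# Assumed clipping helper: ToPILImage expects float tensors in [0, 1].
def clip(x):
    return torch.clamp(x, 0, 1)

# Usage: drop the batch dimension (if any) and convert the perturbed tensor back to an image.
adv_img = tf(pert_image.squeeze(0).detach().cpu())
adv_img.save('test_im1_adv.png')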
