I used test_deepfool.py on the two test pictures. When the generated perturbed image is fed through the network again for a forward pass, the predicted category does not change.
The image produced directly by DeepFool differs quite a lot from the original. After applying some normalization in the code to the perturbed image, it is clearly much closer to the original image, but its predicted category still does not change.
For example, with test_im1.jpg: the original category is macaw, and DeepFool reports the perturbed image's category as flamingo (the image output directly by the network looks drastically changed).
But after a series of other operations the image becomes very similar to the original, and at the same time the predicted category is macaw again.
The processed adversarial examples have been destroyed; no matter what method is used, the attack cannot succeed. No additional operations should be applied to the generated adversarial examples.
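Why post-processing destroys the attack can be illustrated without any framework. The sketch below (an assumption about what happens, not the repo's actual code) shows that saving an image to an 8-bit format quantizes each pixel, which can erase a perturbation smaller than one quantization step:

```python
# Saving a float image to PNG/JPEG maps each pixel to an integer in [0, 255].
# A DeepFool perturbation is often smaller than one quantization step (1/255),
# so a save/load round trip can map the perturbed pixel back to the clean one.

def to_uint8(x):
    """Quantize a [0, 1] float pixel to an 8-bit value, as image saving does."""
    return round(x * 255)

def from_uint8(q):
    """Map an 8-bit value back to a [0, 1] float pixel."""
    return q / 255

pixel = 0.500
perturbation = 0.001          # smaller than 1/255, as DeepFool perturbations can be
adv = pixel + perturbation

# After the round trip the perturbed pixel collapses onto the clean pixel:
restored = from_uint8(to_uint8(adv))
print(restored == from_uint8(to_uint8(pixel)))  # prints True: perturbation is gone
```

This is one reason the re-classified image reverts to the original category: the perturbation that flipped the label never survives the write-to-disk step.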
I got the same result as mengqid123. I think there are some operations at the beginning of test_deepfool.py that destroy the small perturbation on the original image that "fools" the network. If you set a very high overshoot in deepfool.py, you will get the same result.
There may be a loss of precision in the normalization or inverse-normalization step; you can try the following for the inverse normalization:

from torchvision import transforms

def clip(t):
    # clip is not defined in the original snippet; clamping to [0, 1] is a
    # reasonable choice before converting back to a PIL image
    return t.clamp(0, 1)

tf = transforms.Compose([
    transforms.Normalize(mean=[-m / s for m, s in zip(mean, std)],
                         std=[1 / s for s in std]),
    transforms.Lambda(clip),
    transforms.ToPILImage(),
    transforms.CenterCrop(224),
])
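The mean'/std' parameters above invert torchvision's Normalize analytically: since Normalize computes y = (x - mean) / std, applying it again with mean' = -mean/std and std' = 1/std gives (y + mean/std) * std = x. A quick plain-Python check, using the standard ImageNet statistics as an assumed example:

```python
# Verify that Normalize(mean'=-m/s, std'=1/s) undoes Normalize(mean=m, std=s):
#   (y - mean') / std' = ((x - m)/s + m/s) * s = x
mean = [0.485, 0.456, 0.406]   # assumed ImageNet channel means
std = [0.229, 0.224, 0.225]    # assumed ImageNet channel stds

x = [0.2, 0.5, 0.8]                                   # example pixel, one value per channel
y = [(xi - m) / s for xi, m, s in zip(x, mean, std)]  # forward normalization

inv_mean = [-m / s for m, s in zip(mean, std)]
inv_std = [1 / s for s in std]
x_back = [(yi - im) / istd for yi, im, istd in zip(y, inv_mean, inv_std)]

print(all(abs(a - b) < 1e-9 for a, b in zip(x, x_back)))  # prints True
```

The inversion itself is exact up to floating-point rounding, so any large discrepancy seen in practice would come from the clip or the float-to-uint8 conversion in ToPILImage, not from these parameters.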