
Some puzzles about the paper #2

Open
Meteor-Stars opened this issue Nov 3, 2022 · 0 comments

@Meteor-Stars

Hi, thanks for your nice work! I have some questions about the paper:

1. I am trying to reproduce the results in Table 3 of the paper:
I train PreAct-ResNet-18 with "seen" adversarial examples generated at training time within an l∞ ball of ε = 8/255 to obtain a robust model, and I generate "unseen" adversarial examples with differently sized l∞ balls and with other norm balls (e.g., l1, l2) to test the model's robustness against unseen attacks. However, I find that the defense model trained with l∞, ε = 8/255 achieves better accuracy on the l2, ε = 300/255 adversarial examples (generated against the trained defense model) than the paper reports: for example, the accuracy on l2, ε = 300/255 adversarial examples already reaches 38.48% at the 5th epoch, versus 36.87% in Table 3 of the paper. Could there be a problem in how I generate the unseen l2, ε = 300/255 adversarial examples that leads to these spuriously better results? (A minimal sketch of my l2 generation is included after question 2 below.)

2. Is the defense model in Table 3 trained with PGD-100 under l∞, ε = 8/255? I could not find a related description of this in the paper.
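
For context, my generation of the unseen l2 adversarial examples roughly follows the sketch below (PyTorch-style; `model` is the trained defense model in eval mode, images are assumed to lie in [0, 1], and the step-size heuristic and the 100 iterations are my own assumptions rather than settings stated in the paper):

```python
import torch
import torch.nn.functional as F


def pgd_l2(model, x, y, eps=300 / 255, steps=100, alpha=None):
    """Untargeted PGD attack inside an l2 ball of radius eps (evaluation only)."""
    if alpha is None:
        alpha = 2.5 * eps / steps  # common step-size heuristic; an assumption, not from the paper

    # random start inside the l2 ball around the clean images
    delta = torch.randn_like(x)
    delta = delta / delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
    delta = delta * eps * torch.rand(x.size(0), 1, 1, 1, device=x.device)
    x_adv = (x + delta).clamp(0, 1)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # ascend along the l2-normalized gradient
        grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = x_adv.detach() + alpha * grad / grad_norm

        # project back onto the l2 ball around x and the valid pixel range
        delta = x_adv - x
        delta_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = (x + delta * (eps / delta_norm).clamp(max=1.0)).clamp(0, 1)

    return x_adv.detach()
```

Robust accuracy against the unseen attack is then measured as the accuracy of `model` on `pgd_l2(model, x, y)` over the test set.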

I would be grateful if you could help me with the above puzzles. Thank you!
